00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 32 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3532 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.025 The recommended git tool is: git 00:00:00.025 using credential 00000000-0000-0000-0000-000000000002 00:00:00.028 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.041 Fetching changes from the remote Git repository 00:00:00.044 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.063 Using shallow fetch with depth 1 00:00:00.063 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.063 > git --version # timeout=10 00:00:00.090 > git --version # 'git version 2.39.2' 00:00:00.090 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.123 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.123 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.616 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.627 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.639 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:02.639 > git config core.sparsecheckout # timeout=10 00:00:02.648 > git read-tree -mu HEAD # timeout=10 00:00:02.664 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 
00:00:02.683 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:02.683 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:02.873 [Pipeline] Start of Pipeline 00:00:02.887 [Pipeline] library 00:00:02.889 Loading library shm_lib@master 00:00:02.889 Library shm_lib@master is cached. Copying from home. 00:00:02.905 [Pipeline] node 00:00:02.919 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.921 [Pipeline] { 00:00:02.930 [Pipeline] catchError 00:00:02.931 [Pipeline] { 00:00:02.940 [Pipeline] wrap 00:00:02.946 [Pipeline] { 00:00:02.952 [Pipeline] stage 00:00:02.953 [Pipeline] { (Prologue) 00:00:02.966 [Pipeline] echo 00:00:02.968 Node: VM-host-WFP7 00:00:02.972 [Pipeline] cleanWs 00:00:02.981 [WS-CLEANUP] Deleting project workspace... 00:00:02.981 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.987 [WS-CLEANUP] done 00:00:03.178 [Pipeline] setCustomBuildProperty 00:00:03.245 [Pipeline] httpRequest 00:00:03.646 [Pipeline] echo 00:00:03.646 Sorcerer 10.211.164.101 is alive 00:00:03.654 [Pipeline] retry 00:00:03.655 [Pipeline] { 00:00:03.667 [Pipeline] httpRequest 00:00:03.672 HttpMethod: GET 00:00:03.672 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:03.673 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:03.674 Response Code: HTTP/1.1 200 OK 00:00:03.674 Success: Status code 200 is in the accepted range: 200,404 00:00:03.675 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:03.820 [Pipeline] } 00:00:03.836 [Pipeline] // retry 00:00:03.843 [Pipeline] sh 00:00:04.125 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:04.138 [Pipeline] httpRequest 00:00:04.835 [Pipeline] echo 00:00:04.836 Sorcerer 10.211.164.101 is alive 00:00:04.844 [Pipeline] retry 00:00:04.845 
[Pipeline] { 00:00:04.856 [Pipeline] httpRequest 00:00:04.859 HttpMethod: GET 00:00:04.860 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:04.860 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:04.861 Response Code: HTTP/1.1 200 OK 00:00:04.862 Success: Status code 200 is in the accepted range: 200,404 00:00:04.862 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:33.098 [Pipeline] } 00:01:33.112 [Pipeline] // retry 00:01:33.118 [Pipeline] sh 00:01:33.403 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:35.957 [Pipeline] sh 00:01:36.241 + git -C spdk log --oneline -n5 00:01:36.241 b18e1bd62 version: v24.09.1-pre 00:01:36.241 19524ad45 version: v24.09 00:01:36.241 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:36.241 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:36.241 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:36.261 [Pipeline] withCredentials 00:01:36.272 > git --version # timeout=10 00:01:36.286 > git --version # 'git version 2.39.2' 00:01:36.303 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:36.305 [Pipeline] { 00:01:36.315 [Pipeline] retry 00:01:36.317 [Pipeline] { 00:01:36.333 [Pipeline] sh 00:01:36.616 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:48.844 [Pipeline] } 00:01:48.857 [Pipeline] // retry 00:01:48.862 [Pipeline] } 00:01:48.873 [Pipeline] // withCredentials 00:01:48.881 [Pipeline] httpRequest 00:01:50.204 [Pipeline] echo 00:01:50.206 Sorcerer 10.211.164.101 is alive 00:01:50.216 [Pipeline] retry 00:01:50.218 [Pipeline] { 00:01:50.231 [Pipeline] httpRequest 00:01:50.236 HttpMethod: GET 00:01:50.237 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:50.238 Sending request to url: 
http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:50.239 Response Code: HTTP/1.1 200 OK 00:01:50.239 Success: Status code 200 is in the accepted range: 200,404 00:01:50.240 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:57.801 [Pipeline] } 00:01:57.816 [Pipeline] // retry 00:01:57.823 [Pipeline] sh 00:01:58.103 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:59.514 [Pipeline] sh 00:01:59.790 + git -C dpdk log --oneline -n5 00:01:59.790 caf0f5d395 version: 22.11.4 00:01:59.790 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:59.790 dc9c799c7d vhost: fix missing spinlock unlock 00:01:59.790 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:59.790 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:59.808 [Pipeline] writeFile 00:01:59.824 [Pipeline] sh 00:02:00.109 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:00.120 [Pipeline] sh 00:02:00.403 + cat autorun-spdk.conf 00:02:00.403 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.403 SPDK_RUN_ASAN=1 00:02:00.403 SPDK_RUN_UBSAN=1 00:02:00.403 SPDK_TEST_RAID=1 00:02:00.403 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:00.403 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:00.403 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.410 RUN_NIGHTLY=1 00:02:00.412 [Pipeline] } 00:02:00.425 [Pipeline] // stage 00:02:00.440 [Pipeline] stage 00:02:00.442 [Pipeline] { (Run VM) 00:02:00.455 [Pipeline] sh 00:02:00.740 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:00.740 + echo 'Start stage prepare_nvme.sh' 00:02:00.740 Start stage prepare_nvme.sh 00:02:00.740 + [[ -n 6 ]] 00:02:00.740 + disk_prefix=ex6 00:02:00.740 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:02:00.740 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:02:00.740 + source 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:02:00.740 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.740 ++ SPDK_RUN_ASAN=1 00:02:00.740 ++ SPDK_RUN_UBSAN=1 00:02:00.740 ++ SPDK_TEST_RAID=1 00:02:00.740 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:00.740 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:00.740 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.740 ++ RUN_NIGHTLY=1 00:02:00.740 + cd /var/jenkins/workspace/raid-vg-autotest 00:02:00.740 + nvme_files=() 00:02:00.740 + declare -A nvme_files 00:02:00.740 + backend_dir=/var/lib/libvirt/images/backends 00:02:00.740 + nvme_files['nvme.img']=5G 00:02:00.740 + nvme_files['nvme-cmb.img']=5G 00:02:00.740 + nvme_files['nvme-multi0.img']=4G 00:02:00.740 + nvme_files['nvme-multi1.img']=4G 00:02:00.740 + nvme_files['nvme-multi2.img']=4G 00:02:00.740 + nvme_files['nvme-openstack.img']=8G 00:02:00.740 + nvme_files['nvme-zns.img']=5G 00:02:00.740 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:00.740 + (( SPDK_TEST_FTL == 1 )) 00:02:00.740 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:00.740 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:00.740 + for nvme in "${!nvme_files[@]}" 00:02:00.740 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:02:00.740 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.740 + for nvme in "${!nvme_files[@]}" 00:02:00.740 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:02:00.740 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.740 + for nvme in "${!nvme_files[@]}" 00:02:00.740 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:02:00.740 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:00.740 + for nvme in "${!nvme_files[@]}" 00:02:00.740 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:02:00.741 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.741 + for nvme in "${!nvme_files[@]}" 00:02:00.741 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:02:00.741 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.741 + for nvme in "${!nvme_files[@]}" 00:02:00.741 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:02:00.741 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.741 + for nvme in "${!nvme_files[@]}" 00:02:00.741 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:02:00.741 
Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:01.000 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:02:01.001 + echo 'End stage prepare_nvme.sh' 00:02:01.001 End stage prepare_nvme.sh 00:02:01.013 [Pipeline] sh 00:02:01.297 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:01.297 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:02:01.297 00:02:01.297 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:02:01.297 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:02:01.297 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:02:01.297 HELP=0 00:02:01.297 DRY_RUN=0 00:02:01.297 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:02:01.297 NVME_DISKS_TYPE=nvme,nvme, 00:02:01.297 NVME_AUTO_CREATE=0 00:02:01.297 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:02:01.297 NVME_CMB=,, 00:02:01.297 NVME_PMR=,, 00:02:01.297 NVME_ZNS=,, 00:02:01.297 NVME_MS=,, 00:02:01.297 NVME_FDP=,, 00:02:01.297 SPDK_VAGRANT_DISTRO=fedora39 00:02:01.297 SPDK_VAGRANT_VMCPU=10 00:02:01.297 SPDK_VAGRANT_VMRAM=12288 00:02:01.297 SPDK_VAGRANT_PROVIDER=libvirt 00:02:01.297 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:01.297 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:01.297 SPDK_OPENSTACK_NETWORK=0 00:02:01.297 VAGRANT_PACKAGE_BOX=0 00:02:01.297 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:01.297 
FORCE_DISTRO=true 00:02:01.297 VAGRANT_BOX_VERSION= 00:02:01.297 EXTRA_VAGRANTFILES= 00:02:01.297 NIC_MODEL=virtio 00:02:01.297 00:02:01.297 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:02:01.297 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:02:03.206 Bringing machine 'default' up with 'libvirt' provider... 00:02:03.776 ==> default: Creating image (snapshot of base box volume). 00:02:03.776 ==> default: Creating domain with the following settings... 00:02:03.776 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728785782_e5be1e54275b9b1c22ad 00:02:03.776 ==> default: -- Domain type: kvm 00:02:03.776 ==> default: -- Cpus: 10 00:02:03.776 ==> default: -- Feature: acpi 00:02:03.776 ==> default: -- Feature: apic 00:02:03.776 ==> default: -- Feature: pae 00:02:03.776 ==> default: -- Memory: 12288M 00:02:03.776 ==> default: -- Memory Backing: hugepages: 00:02:03.776 ==> default: -- Management MAC: 00:02:03.776 ==> default: -- Loader: 00:02:03.776 ==> default: -- Nvram: 00:02:03.776 ==> default: -- Base box: spdk/fedora39 00:02:03.776 ==> default: -- Storage pool: default 00:02:03.776 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728785782_e5be1e54275b9b1c22ad.img (20G) 00:02:03.776 ==> default: -- Volume Cache: default 00:02:03.776 ==> default: -- Kernel: 00:02:03.776 ==> default: -- Initrd: 00:02:03.776 ==> default: -- Graphics Type: vnc 00:02:03.776 ==> default: -- Graphics Port: -1 00:02:03.776 ==> default: -- Graphics IP: 127.0.0.1 00:02:03.776 ==> default: -- Graphics Password: Not defined 00:02:03.776 ==> default: -- Video Type: cirrus 00:02:03.776 ==> default: -- Video VRAM: 9216 00:02:03.776 ==> default: -- Sound Type: 00:02:03.776 ==> default: -- Keymap: en-us 00:02:03.776 ==> default: -- TPM Path: 00:02:03.776 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:03.776 ==> default: -- Command line args: 00:02:03.776 
==> default: -> value=-device, 00:02:03.776 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:03.776 ==> default: -> value=-drive, 00:02:03.776 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:02:03.776 ==> default: -> value=-device, 00:02:03.776 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:03.776 ==> default: -> value=-device, 00:02:03.776 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:03.776 ==> default: -> value=-drive, 00:02:03.776 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:03.776 ==> default: -> value=-device, 00:02:03.776 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:03.776 ==> default: -> value=-drive, 00:02:03.776 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:03.776 ==> default: -> value=-device, 00:02:03.776 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:03.776 ==> default: -> value=-drive, 00:02:03.776 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:03.776 ==> default: -> value=-device, 00:02:03.776 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:03.776 ==> default: Creating shared folders metadata... 00:02:04.037 ==> default: Starting domain. 00:02:05.417 ==> default: Waiting for domain to get an IP address... 00:02:23.520 ==> default: Waiting for SSH to become available... 00:02:23.520 ==> default: Configuring and enabling network interfaces... 
00:02:27.736 default: SSH address: 192.168.121.97:22 00:02:27.736 default: SSH username: vagrant 00:02:27.736 default: SSH auth method: private key 00:02:31.034 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:39.165 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:44.439 ==> default: Mounting SSHFS shared folder... 00:02:46.972 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:46.972 ==> default: Checking Mount.. 00:02:48.876 ==> default: Folder Successfully Mounted! 00:02:48.876 ==> default: Running provisioner: file... 00:02:49.816 default: ~/.gitconfig => .gitconfig 00:02:50.387 00:02:50.387 SUCCESS! 00:02:50.387 00:02:50.387 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:50.387 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:50.387 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:50.387 00:02:50.396 [Pipeline] } 00:02:50.411 [Pipeline] // stage 00:02:50.420 [Pipeline] dir 00:02:50.421 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:50.422 [Pipeline] { 00:02:50.435 [Pipeline] catchError 00:02:50.436 [Pipeline] { 00:02:50.449 [Pipeline] sh 00:02:50.730 + vagrant ssh-config --host vagrant 00:02:50.730 + sed -ne /^Host/,$p 00:02:50.730 + tee ssh_conf 00:02:53.261 Host vagrant 00:02:53.261 HostName 192.168.121.97 00:02:53.261 User vagrant 00:02:53.261 Port 22 00:02:53.261 UserKnownHostsFile /dev/null 00:02:53.261 StrictHostKeyChecking no 00:02:53.261 PasswordAuthentication no 00:02:53.261 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:53.261 IdentitiesOnly yes 00:02:53.261 LogLevel FATAL 00:02:53.261 ForwardAgent yes 00:02:53.261 ForwardX11 yes 00:02:53.261 00:02:53.275 [Pipeline] withEnv 00:02:53.277 [Pipeline] { 00:02:53.293 [Pipeline] sh 00:02:53.574 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:53.574 source /etc/os-release 00:02:53.574 [[ -e /image.version ]] && img=$(< /image.version) 00:02:53.574 # Minimal, systemd-like check. 00:02:53.574 if [[ -e /.dockerenv ]]; then 00:02:53.574 # Clear garbage from the node's name: 00:02:53.574 # agt-er_autotest_547-896 -> autotest_547-896 00:02:53.574 # $HOSTNAME is the actual container id 00:02:53.574 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:53.574 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:53.574 # We can assume this is a mount from a host where container is running, 00:02:53.574 # so fetch its hostname to easily identify the target swarm worker. 
00:02:53.574 container="$(< /etc/hostname) ($agent)" 00:02:53.574 else 00:02:53.574 # Fallback 00:02:53.574 container=$agent 00:02:53.574 fi 00:02:53.574 fi 00:02:53.574 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:53.574 00:02:53.844 [Pipeline] } 00:02:53.860 [Pipeline] // withEnv 00:02:53.869 [Pipeline] setCustomBuildProperty 00:02:53.883 [Pipeline] stage 00:02:53.886 [Pipeline] { (Tests) 00:02:53.903 [Pipeline] sh 00:02:54.186 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:54.459 [Pipeline] sh 00:02:54.743 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:55.017 [Pipeline] timeout 00:02:55.017 Timeout set to expire in 1 hr 30 min 00:02:55.019 [Pipeline] { 00:02:55.033 [Pipeline] sh 00:02:55.318 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:55.888 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:55.901 [Pipeline] sh 00:02:56.187 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:56.459 [Pipeline] sh 00:02:56.743 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:57.019 [Pipeline] sh 00:02:57.303 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:57.563 ++ readlink -f spdk_repo 00:02:57.563 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:57.563 + [[ -n /home/vagrant/spdk_repo ]] 00:02:57.563 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:57.563 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:57.563 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:57.563 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:57.563 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:57.563 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:57.563 + cd /home/vagrant/spdk_repo 00:02:57.563 + source /etc/os-release 00:02:57.563 ++ NAME='Fedora Linux' 00:02:57.563 ++ VERSION='39 (Cloud Edition)' 00:02:57.563 ++ ID=fedora 00:02:57.563 ++ VERSION_ID=39 00:02:57.563 ++ VERSION_CODENAME= 00:02:57.563 ++ PLATFORM_ID=platform:f39 00:02:57.563 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:57.563 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:57.563 ++ LOGO=fedora-logo-icon 00:02:57.563 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:57.563 ++ HOME_URL=https://fedoraproject.org/ 00:02:57.564 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:57.564 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:57.564 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:57.564 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:57.564 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:57.564 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:57.564 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:57.564 ++ SUPPORT_END=2024-11-12 00:02:57.564 ++ VARIANT='Cloud Edition' 00:02:57.564 ++ VARIANT_ID=cloud 00:02:57.564 + uname -a 00:02:57.564 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:57.564 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:58.133 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:58.133 Hugepages 00:02:58.133 node hugesize free / total 00:02:58.133 node0 1048576kB 0 / 0 00:02:58.133 node0 2048kB 0 / 0 00:02:58.133 00:02:58.133 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.133 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:58.133 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:58.133 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:58.133 + rm -f /tmp/spdk-ld-path 00:02:58.133 + source autorun-spdk.conf 00:02:58.133 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:58.133 ++ SPDK_RUN_ASAN=1 00:02:58.133 ++ SPDK_RUN_UBSAN=1 00:02:58.133 ++ SPDK_TEST_RAID=1 00:02:58.133 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:58.133 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:58.133 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:58.133 ++ RUN_NIGHTLY=1 00:02:58.133 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:58.133 + [[ -n '' ]] 00:02:58.133 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:58.133 + for M in /var/spdk/build-*-manifest.txt 00:02:58.133 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:58.133 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:58.133 + for M in /var/spdk/build-*-manifest.txt 00:02:58.133 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:58.133 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:58.133 + for M in /var/spdk/build-*-manifest.txt 00:02:58.133 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:58.133 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:58.133 ++ uname 00:02:58.394 + [[ Linux == \L\i\n\u\x ]] 00:02:58.394 + sudo dmesg -T 00:02:58.394 + sudo dmesg --clear 00:02:58.394 + dmesg_pid=6161 00:02:58.394 + [[ Fedora Linux == FreeBSD ]] 00:02:58.394 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:58.394 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:58.394 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:58.394 + [[ -x /usr/src/fio-static/fio ]] 00:02:58.394 + sudo dmesg -Tw 00:02:58.394 + export FIO_BIN=/usr/src/fio-static/fio 00:02:58.394 + FIO_BIN=/usr/src/fio-static/fio 00:02:58.394 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:58.394 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:58.394 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:58.394 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:58.394 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:58.394 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:58.394 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:58.394 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:58.394 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:58.394 Test configuration: 00:02:58.394 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:58.394 SPDK_RUN_ASAN=1 00:02:58.394 SPDK_RUN_UBSAN=1 00:02:58.394 SPDK_TEST_RAID=1 00:02:58.394 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:58.394 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:58.394 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:58.394 RUN_NIGHTLY=1 02:17:16 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:58.394 02:17:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:58.394 02:17:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:58.394 02:17:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:58.394 02:17:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:58.394 02:17:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:58.394 02:17:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.394 02:17:16 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.394 02:17:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.394 02:17:16 -- paths/export.sh@5 -- $ export PATH 00:02:58.394 02:17:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.394 02:17:16 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:58.394 02:17:16 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:58.394 02:17:16 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1728785836.XXXXXX 00:02:58.394 02:17:16 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1728785836.fCZkRo 00:02:58.394 02:17:16 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:58.394 02:17:16 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:58.394 02:17:16 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:58.394 02:17:17 -- common/autobuild_common.sh@486 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:58.394 02:17:17 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:58.394 02:17:17 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:58.394 02:17:17 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:58.394 02:17:17 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:58.394 02:17:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.394 02:17:17 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:58.394 02:17:17 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:58.394 02:17:17 -- pm/common@17 -- $ local monitor 00:02:58.394 02:17:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.394 02:17:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.394 02:17:17 -- pm/common@25 -- $ sleep 1 00:02:58.394 02:17:17 -- pm/common@21 -- $ date +%s 00:02:58.394 02:17:17 -- pm/common@21 -- $ date +%s 00:02:58.394 02:17:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728785837 00:02:58.394 02:17:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728785837 00:02:58.394 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728785837_collect-vmstat.pm.log 00:02:58.394 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728785837_collect-cpu-load.pm.log 00:02:59.401 02:17:18 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:59.401 02:17:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:59.401 02:17:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:59.401 02:17:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:59.401 02:17:18 -- spdk/autobuild.sh@16 -- $ date -u 00:02:59.401 Sun Oct 13 02:17:18 AM UTC 2024 00:02:59.401 02:17:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:59.401 v24.09-rc1-9-gb18e1bd62 00:02:59.401 02:17:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:59.401 02:17:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:59.401 02:17:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:59.401 02:17:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:59.401 02:17:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.401 ************************************ 00:02:59.401 START TEST asan 00:02:59.401 ************************************ 00:02:59.401 using asan 00:02:59.401 02:17:18 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:59.401 00:02:59.401 real 0m0.000s 00:02:59.401 user 0m0.000s 00:02:59.401 sys 0m0.000s 00:02:59.401 02:17:18 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:59.401 ************************************ 00:02:59.401 END TEST asan 00:02:59.401 ************************************ 00:02:59.401 02:17:18 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.661 02:17:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:59.661 02:17:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:59.661 02:17:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:59.661 02:17:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:59.661 02:17:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.661 
************************************ 00:02:59.661 START TEST ubsan 00:02:59.661 ************************************ 00:02:59.661 using ubsan 00:02:59.661 02:17:18 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:59.661 00:02:59.661 real 0m0.000s 00:02:59.661 user 0m0.000s 00:02:59.661 sys 0m0.000s 00:02:59.661 02:17:18 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:59.661 02:17:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.661 ************************************ 00:02:59.661 END TEST ubsan 00:02:59.661 ************************************ 00:02:59.661 02:17:18 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:59.661 02:17:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:59.661 02:17:18 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:59.661 02:17:18 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:59.661 02:17:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:59.661 02:17:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.661 ************************************ 00:02:59.661 START TEST build_native_dpdk 00:02:59.661 ************************************ 00:02:59.661 02:17:18 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:59.661 02:17:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:59.662 caf0f5d395 version: 22.11.4 00:02:59.662 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:59.662 dc9c799c7d vhost: fix missing spinlock unlock 00:02:59.662 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:59.662 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:59.662 02:17:18 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:59.662 patching file config/rte_config.h 00:02:59.662 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:59.662 patching file lib/pcapng/rte_pcapng.c 00:02:59.662 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:59.662 02:17:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:59.662 02:17:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:59.663 02:17:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:59.663 02:17:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:06.267 The Meson build system 00:03:06.267 Version: 1.5.0 00:03:06.267 
Source dir: /home/vagrant/spdk_repo/dpdk 00:03:06.267 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:06.267 Build type: native build 00:03:06.267 Program cat found: YES (/usr/bin/cat) 00:03:06.267 Project name: DPDK 00:03:06.267 Project version: 22.11.4 00:03:06.267 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:06.267 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:06.267 Host machine cpu family: x86_64 00:03:06.267 Host machine cpu: x86_64 00:03:06.267 Message: ## Building in Developer Mode ## 00:03:06.267 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:06.267 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:06.267 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:06.267 Program objdump found: YES (/usr/bin/objdump) 00:03:06.267 Program python3 found: YES (/usr/bin/python3) 00:03:06.267 Program cat found: YES (/usr/bin/cat) 00:03:06.267 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:03:06.267 Checking for size of "void *" : 8 00:03:06.267 Checking for size of "void *" : 8 (cached) 00:03:06.267 Library m found: YES 00:03:06.267 Library numa found: YES 00:03:06.267 Has header "numaif.h" : YES 00:03:06.267 Library fdt found: NO 00:03:06.267 Library execinfo found: NO 00:03:06.267 Has header "execinfo.h" : YES 00:03:06.267 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:06.267 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:06.267 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:06.267 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:06.267 Run-time dependency openssl found: YES 3.1.1 00:03:06.267 Run-time dependency libpcap found: YES 1.10.4 00:03:06.267 Has header "pcap.h" with dependency libpcap: YES 00:03:06.267 Compiler for C supports arguments -Wcast-qual: YES 00:03:06.267 Compiler for C supports arguments -Wdeprecated: YES 00:03:06.267 Compiler for C supports arguments -Wformat: YES 00:03:06.267 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:06.267 Compiler for C supports arguments -Wformat-security: NO 00:03:06.267 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:06.267 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:06.267 Compiler for C supports arguments -Wnested-externs: YES 00:03:06.267 Compiler for C supports arguments -Wold-style-definition: YES 00:03:06.267 Compiler for C supports arguments -Wpointer-arith: YES 00:03:06.267 Compiler for C supports arguments -Wsign-compare: YES 00:03:06.267 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:06.267 Compiler for C supports arguments -Wundef: YES 00:03:06.267 Compiler for C supports arguments -Wwrite-strings: YES 00:03:06.267 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:06.267 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:06.267 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:06.267 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:06.267 Compiler for C supports arguments -mavx512f: YES 00:03:06.267 Checking if "AVX512 checking" compiles: YES 00:03:06.267 Fetching value of define "__SSE4_2__" : 1 00:03:06.267 Fetching value of define "__AES__" : 1 00:03:06.267 Fetching value of define "__AVX__" : 1 00:03:06.267 Fetching value of define "__AVX2__" : 1 00:03:06.267 Fetching value of define "__AVX512BW__" : 1 00:03:06.267 Fetching value of define "__AVX512CD__" : 1 00:03:06.267 Fetching value of define "__AVX512DQ__" : 1 00:03:06.267 Fetching value of define "__AVX512F__" : 1 00:03:06.267 Fetching value of define "__AVX512VL__" : 1 00:03:06.267 Fetching value of define "__PCLMUL__" : 1 00:03:06.267 Fetching value of define "__RDRND__" : 1 00:03:06.267 Fetching value of define "__RDSEED__" : 1 00:03:06.267 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:06.267 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:06.267 Message: lib/kvargs: Defining dependency "kvargs" 00:03:06.267 Message: lib/telemetry: Defining dependency "telemetry" 00:03:06.267 Checking for function "getentropy" : YES 00:03:06.267 Message: lib/eal: Defining dependency "eal" 00:03:06.267 Message: lib/ring: Defining dependency "ring" 00:03:06.267 Message: lib/rcu: Defining dependency "rcu" 00:03:06.267 Message: lib/mempool: Defining dependency "mempool" 00:03:06.267 Message: lib/mbuf: Defining dependency "mbuf" 00:03:06.267 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:06.267 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:06.267 Compiler for C supports arguments -mpclmul: YES 00:03:06.267 Compiler for C supports arguments -maes: YES 
00:03:06.267 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:06.267 Compiler for C supports arguments -mavx512bw: YES 00:03:06.267 Compiler for C supports arguments -mavx512dq: YES 00:03:06.267 Compiler for C supports arguments -mavx512vl: YES 00:03:06.267 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:06.267 Compiler for C supports arguments -mavx2: YES 00:03:06.267 Compiler for C supports arguments -mavx: YES 00:03:06.267 Message: lib/net: Defining dependency "net" 00:03:06.267 Message: lib/meter: Defining dependency "meter" 00:03:06.267 Message: lib/ethdev: Defining dependency "ethdev" 00:03:06.267 Message: lib/pci: Defining dependency "pci" 00:03:06.267 Message: lib/cmdline: Defining dependency "cmdline" 00:03:06.267 Message: lib/metrics: Defining dependency "metrics" 00:03:06.267 Message: lib/hash: Defining dependency "hash" 00:03:06.267 Message: lib/timer: Defining dependency "timer" 00:03:06.267 Fetching value of define "__AVX2__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:06.267 Message: lib/acl: Defining dependency "acl" 00:03:06.267 Message: lib/bbdev: Defining dependency "bbdev" 00:03:06.267 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:06.267 Run-time dependency libelf found: YES 0.191 00:03:06.267 Message: lib/bpf: Defining dependency "bpf" 00:03:06.267 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:06.267 Message: lib/compressdev: Defining dependency "compressdev" 00:03:06.267 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:06.267 Message: lib/distributor: Defining dependency "distributor" 00:03:06.267 Message: lib/efd: Defining dependency "efd" 00:03:06.267 Message: lib/eventdev: Defining dependency "eventdev" 00:03:06.267 Message: lib/gpudev: 
Defining dependency "gpudev" 00:03:06.267 Message: lib/gro: Defining dependency "gro" 00:03:06.267 Message: lib/gso: Defining dependency "gso" 00:03:06.267 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:06.267 Message: lib/jobstats: Defining dependency "jobstats" 00:03:06.267 Message: lib/latencystats: Defining dependency "latencystats" 00:03:06.267 Message: lib/lpm: Defining dependency "lpm" 00:03:06.267 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:06.267 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:06.267 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:06.267 Message: lib/member: Defining dependency "member" 00:03:06.267 Message: lib/pcapng: Defining dependency "pcapng" 00:03:06.267 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:06.267 Message: lib/power: Defining dependency "power" 00:03:06.267 Message: lib/rawdev: Defining dependency "rawdev" 00:03:06.267 Message: lib/regexdev: Defining dependency "regexdev" 00:03:06.267 Message: lib/dmadev: Defining dependency "dmadev" 00:03:06.267 Message: lib/rib: Defining dependency "rib" 00:03:06.267 Message: lib/reorder: Defining dependency "reorder" 00:03:06.267 Message: lib/sched: Defining dependency "sched" 00:03:06.267 Message: lib/security: Defining dependency "security" 00:03:06.267 Message: lib/stack: Defining dependency "stack" 00:03:06.267 Has header "linux/userfaultfd.h" : YES 00:03:06.267 Message: lib/vhost: Defining dependency "vhost" 00:03:06.267 Message: lib/ipsec: Defining dependency "ipsec" 00:03:06.268 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:06.268 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:06.268 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:06.268 Message: lib/fib: Defining dependency "fib" 00:03:06.268 Message: lib/port: Defining dependency "port" 00:03:06.268 Message: lib/pdump: Defining dependency "pdump" 
00:03:06.268 Message: lib/table: Defining dependency "table" 00:03:06.268 Message: lib/pipeline: Defining dependency "pipeline" 00:03:06.268 Message: lib/graph: Defining dependency "graph" 00:03:06.268 Message: lib/node: Defining dependency "node" 00:03:06.268 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:06.268 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:06.268 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:06.268 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:06.268 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:06.268 Compiler for C supports arguments -Wno-unused-value: YES 00:03:06.268 Compiler for C supports arguments -Wno-format: YES 00:03:06.268 Compiler for C supports arguments -Wno-format-security: YES 00:03:06.268 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:06.268 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:06.836 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:06.836 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:06.836 Fetching value of define "__AVX2__" : 1 (cached) 00:03:06.836 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:06.836 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:06.836 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:06.836 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:06.836 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:06.836 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:06.836 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:06.836 Configuring doxy-api.conf using configuration 00:03:06.836 Program sphinx-build found: NO 00:03:06.836 Configuring rte_build_config.h using configuration 00:03:06.836 Message: 00:03:06.836 ================= 00:03:06.836 Applications Enabled 00:03:06.836 ================= 00:03:06.836 00:03:06.836 apps: 
00:03:06.836 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:03:06.836 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:03:06.836 test-security-perf, 00:03:06.836 00:03:06.836 Message: 00:03:06.836 ================= 00:03:06.836 Libraries Enabled 00:03:06.836 ================= 00:03:06.836 00:03:06.836 libs: 00:03:06.836 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:03:06.836 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:03:06.836 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:03:06.836 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:03:06.836 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:03:06.836 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:06.836 table, pipeline, graph, node, 00:03:06.836 00:03:06.836 Message: 00:03:06.836 =============== 00:03:06.836 Drivers Enabled 00:03:06.836 =============== 00:03:06.836 00:03:06.836 common: 00:03:06.836 00:03:06.836 bus: 00:03:06.836 pci, vdev, 00:03:06.836 mempool: 00:03:06.836 ring, 00:03:06.836 dma: 00:03:06.836 00:03:06.836 net: 00:03:06.836 i40e, 00:03:06.836 raw: 00:03:06.836 00:03:06.836 crypto: 00:03:06.836 00:03:06.836 compress: 00:03:06.836 00:03:06.836 regex: 00:03:06.836 00:03:06.836 vdpa: 00:03:06.836 00:03:06.836 event: 00:03:06.836 00:03:06.836 baseband: 00:03:06.836 00:03:06.836 gpu: 00:03:06.836 00:03:06.836 00:03:06.836 Message: 00:03:06.836 ================= 00:03:06.836 Content Skipped 00:03:06.836 ================= 00:03:06.836 00:03:06.836 apps: 00:03:06.836 00:03:06.836 libs: 00:03:06.836 kni: explicitly disabled via build config (deprecated lib) 00:03:06.836 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:06.836 00:03:06.836 drivers: 00:03:06.836 common/cpt: not in enabled drivers build config 00:03:06.837 common/dpaax: not in enabled drivers build 
config 00:03:06.837 common/iavf: not in enabled drivers build config 00:03:06.837 common/idpf: not in enabled drivers build config 00:03:06.837 common/mvep: not in enabled drivers build config 00:03:06.837 common/octeontx: not in enabled drivers build config 00:03:06.837 bus/auxiliary: not in enabled drivers build config 00:03:06.837 bus/dpaa: not in enabled drivers build config 00:03:06.837 bus/fslmc: not in enabled drivers build config 00:03:06.837 bus/ifpga: not in enabled drivers build config 00:03:06.837 bus/vmbus: not in enabled drivers build config 00:03:06.837 common/cnxk: not in enabled drivers build config 00:03:06.837 common/mlx5: not in enabled drivers build config 00:03:06.837 common/qat: not in enabled drivers build config 00:03:06.837 common/sfc_efx: not in enabled drivers build config 00:03:06.837 mempool/bucket: not in enabled drivers build config 00:03:06.837 mempool/cnxk: not in enabled drivers build config 00:03:06.837 mempool/dpaa: not in enabled drivers build config 00:03:06.837 mempool/dpaa2: not in enabled drivers build config 00:03:06.837 mempool/octeontx: not in enabled drivers build config 00:03:06.837 mempool/stack: not in enabled drivers build config 00:03:06.837 dma/cnxk: not in enabled drivers build config 00:03:06.837 dma/dpaa: not in enabled drivers build config 00:03:06.837 dma/dpaa2: not in enabled drivers build config 00:03:06.837 dma/hisilicon: not in enabled drivers build config 00:03:06.837 dma/idxd: not in enabled drivers build config 00:03:06.837 dma/ioat: not in enabled drivers build config 00:03:06.837 dma/skeleton: not in enabled drivers build config 00:03:06.837 net/af_packet: not in enabled drivers build config 00:03:06.837 net/af_xdp: not in enabled drivers build config 00:03:06.837 net/ark: not in enabled drivers build config 00:03:06.837 net/atlantic: not in enabled drivers build config 00:03:06.837 net/avp: not in enabled drivers build config 00:03:06.837 net/axgbe: not in enabled drivers build config 00:03:06.837 
net/bnx2x: not in enabled drivers build config 00:03:06.837 net/bnxt: not in enabled drivers build config 00:03:06.837 net/bonding: not in enabled drivers build config 00:03:06.837 net/cnxk: not in enabled drivers build config 00:03:06.837 net/cxgbe: not in enabled drivers build config 00:03:06.837 net/dpaa: not in enabled drivers build config 00:03:06.837 net/dpaa2: not in enabled drivers build config 00:03:06.837 net/e1000: not in enabled drivers build config 00:03:06.837 net/ena: not in enabled drivers build config 00:03:06.837 net/enetc: not in enabled drivers build config 00:03:06.837 net/enetfec: not in enabled drivers build config 00:03:06.837 net/enic: not in enabled drivers build config 00:03:06.837 net/failsafe: not in enabled drivers build config 00:03:06.837 net/fm10k: not in enabled drivers build config 00:03:06.837 net/gve: not in enabled drivers build config 00:03:06.837 net/hinic: not in enabled drivers build config 00:03:06.837 net/hns3: not in enabled drivers build config 00:03:06.837 net/iavf: not in enabled drivers build config 00:03:06.837 net/ice: not in enabled drivers build config 00:03:06.837 net/idpf: not in enabled drivers build config 00:03:06.837 net/igc: not in enabled drivers build config 00:03:06.837 net/ionic: not in enabled drivers build config 00:03:06.837 net/ipn3ke: not in enabled drivers build config 00:03:06.837 net/ixgbe: not in enabled drivers build config 00:03:06.837 net/kni: not in enabled drivers build config 00:03:06.837 net/liquidio: not in enabled drivers build config 00:03:06.837 net/mana: not in enabled drivers build config 00:03:06.837 net/memif: not in enabled drivers build config 00:03:06.837 net/mlx4: not in enabled drivers build config 00:03:06.837 net/mlx5: not in enabled drivers build config 00:03:06.837 net/mvneta: not in enabled drivers build config 00:03:06.837 net/mvpp2: not in enabled drivers build config 00:03:06.837 net/netvsc: not in enabled drivers build config 00:03:06.837 net/nfb: not in enabled 
drivers build config 00:03:06.837 net/nfp: not in enabled drivers build config 00:03:06.837 net/ngbe: not in enabled drivers build config 00:03:06.837 net/null: not in enabled drivers build config 00:03:06.837 net/octeontx: not in enabled drivers build config 00:03:06.837 net/octeon_ep: not in enabled drivers build config 00:03:06.837 net/pcap: not in enabled drivers build config 00:03:06.837 net/pfe: not in enabled drivers build config 00:03:06.837 net/qede: not in enabled drivers build config 00:03:06.837 net/ring: not in enabled drivers build config 00:03:06.837 net/sfc: not in enabled drivers build config 00:03:06.837 net/softnic: not in enabled drivers build config 00:03:06.837 net/tap: not in enabled drivers build config 00:03:06.837 net/thunderx: not in enabled drivers build config 00:03:06.837 net/txgbe: not in enabled drivers build config 00:03:06.837 net/vdev_netvsc: not in enabled drivers build config 00:03:06.837 net/vhost: not in enabled drivers build config 00:03:06.837 net/virtio: not in enabled drivers build config 00:03:06.837 net/vmxnet3: not in enabled drivers build config 00:03:06.837 raw/cnxk_bphy: not in enabled drivers build config 00:03:06.837 raw/cnxk_gpio: not in enabled drivers build config 00:03:06.837 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:06.837 raw/ifpga: not in enabled drivers build config 00:03:06.837 raw/ntb: not in enabled drivers build config 00:03:06.837 raw/skeleton: not in enabled drivers build config 00:03:06.837 crypto/armv8: not in enabled drivers build config 00:03:06.837 crypto/bcmfs: not in enabled drivers build config 00:03:06.837 crypto/caam_jr: not in enabled drivers build config 00:03:06.837 crypto/ccp: not in enabled drivers build config 00:03:06.837 crypto/cnxk: not in enabled drivers build config 00:03:06.837 crypto/dpaa_sec: not in enabled drivers build config 00:03:06.837 crypto/dpaa2_sec: not in enabled drivers build config 00:03:06.837 crypto/ipsec_mb: not in enabled drivers build config 
00:03:06.837 crypto/mlx5: not in enabled drivers build config 00:03:06.837 crypto/mvsam: not in enabled drivers build config 00:03:06.837 crypto/nitrox: not in enabled drivers build config 00:03:06.837 crypto/null: not in enabled drivers build config 00:03:06.837 crypto/octeontx: not in enabled drivers build config 00:03:06.837 crypto/openssl: not in enabled drivers build config 00:03:06.837 crypto/scheduler: not in enabled drivers build config 00:03:06.837 crypto/uadk: not in enabled drivers build config 00:03:06.837 crypto/virtio: not in enabled drivers build config 00:03:06.837 compress/isal: not in enabled drivers build config 00:03:06.837 compress/mlx5: not in enabled drivers build config 00:03:06.837 compress/octeontx: not in enabled drivers build config 00:03:06.837 compress/zlib: not in enabled drivers build config 00:03:06.837 regex/mlx5: not in enabled drivers build config 00:03:06.837 regex/cn9k: not in enabled drivers build config 00:03:06.837 vdpa/ifc: not in enabled drivers build config 00:03:06.837 vdpa/mlx5: not in enabled drivers build config 00:03:06.837 vdpa/sfc: not in enabled drivers build config 00:03:06.837 event/cnxk: not in enabled drivers build config 00:03:06.837 event/dlb2: not in enabled drivers build config 00:03:06.837 event/dpaa: not in enabled drivers build config 00:03:06.837 event/dpaa2: not in enabled drivers build config 00:03:06.837 event/dsw: not in enabled drivers build config 00:03:06.837 event/opdl: not in enabled drivers build config 00:03:06.837 event/skeleton: not in enabled drivers build config 00:03:06.837 event/sw: not in enabled drivers build config 00:03:06.837 event/octeontx: not in enabled drivers build config 00:03:06.837 baseband/acc: not in enabled drivers build config 00:03:06.837 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:06.837 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:06.837 baseband/la12xx: not in enabled drivers build config 00:03:06.837 baseband/null: not in 
enabled drivers build config
00:03:06.837 baseband/turbo_sw: not in enabled drivers build config
00:03:06.837 gpu/cuda: not in enabled drivers build config
00:03:06.837
00:03:06.837
00:03:06.837 Build targets in project: 311
00:03:06.837
00:03:06.837 DPDK 22.11.4
00:03:06.837
00:03:06.837 User defined options
00:03:06.837 libdir : lib
00:03:06.837 prefix : /home/vagrant/spdk_repo/dpdk/build
00:03:06.837 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:06.837 c_link_args :
00:03:06.837 enable_docs : false
00:03:06.837 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:06.837 enable_kmods : false
00:03:06.837 machine : native
00:03:06.837 tests : false
00:03:06.837
00:03:06.837 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:06.837 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:07.096 02:17:25 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:03:07.096 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:07.357 [1/740] Generating lib/rte_kvargs_def with a custom command
00:03:07.357 [2/740] Generating lib/rte_telemetry_mingw with a custom command
00:03:07.357 [3/740] Generating lib/rte_kvargs_mingw with a custom command
00:03:07.357 [4/740] Generating lib/rte_telemetry_def with a custom command
00:03:07.357 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:07.357 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:07.357 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:07.357 [8/740] Linking static target lib/librte_kvargs.a
00:03:07.357 [9/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:07.357 [10/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:07.357 [11/740] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:07.357 [12/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:07.357 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:07.357 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:07.357 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:07.357 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:07.357 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:07.617 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:07.617 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:07.617 [20/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.617 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:07.617 [22/740] Linking target lib/librte_kvargs.so.23.0 00:03:07.617 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:07.617 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:07.617 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:07.617 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:07.617 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:07.617 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:07.878 [29/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:07.878 [30/740] Linking static target lib/librte_telemetry.a 00:03:07.878 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:07.878 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:07.878 [33/740] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:07.878 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:07.878 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:07.878 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.878 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.878 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:07.878 [39/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:07.878 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:07.878 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:08.138 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:08.138 [43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.138 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:08.138 [45/740] Linking target lib/librte_telemetry.so.23.0 00:03:08.138 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:08.138 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:08.138 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:08.138 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:08.138 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:08.138 [51/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:08.138 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.138 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:08.138 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 
00:03:08.138 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:08.398 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:08.398 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.398 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.398 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.398 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:08.398 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.398 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.398 [63/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:08.398 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.398 [65/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:08.398 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:08.398 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:08.398 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.398 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.398 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:08.398 [71/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:08.398 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:08.658 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:08.658 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:08.658 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:08.658 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:08.658 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 
00:03:08.658 [78/740] Generating lib/rte_eal_def with a custom command 00:03:08.658 [79/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:08.658 [80/740] Generating lib/rte_eal_mingw with a custom command 00:03:08.658 [81/740] Generating lib/rte_ring_def with a custom command 00:03:08.658 [82/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.658 [83/740] Generating lib/rte_rcu_def with a custom command 00:03:08.658 [84/740] Generating lib/rte_ring_mingw with a custom command 00:03:08.658 [85/740] Generating lib/rte_rcu_mingw with a custom command 00:03:08.658 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:08.658 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:08.658 [88/740] Linking static target lib/librte_ring.a 00:03:08.918 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:08.918 [90/740] Generating lib/rte_mempool_def with a custom command 00:03:08.918 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:03:08.918 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:08.918 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:08.918 [94/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:08.918 [95/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.918 [96/740] Generating lib/rte_mbuf_def with a custom command 00:03:08.918 [97/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.179 [98/740] Generating lib/rte_mbuf_mingw with a custom command 00:03:09.179 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:09.179 [100/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.179 [101/740] Linking static target lib/librte_eal.a 00:03:09.179 [102/740] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.179 [103/740] Linking static target lib/librte_rcu.a 00:03:09.179 [104/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.442 [105/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:09.442 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.442 [107/740] Linking static target lib/librte_mempool.a 00:03:09.442 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:09.442 [109/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:09.442 [110/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:09.442 [111/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:09.442 [112/740] Generating lib/rte_net_def with a custom command 00:03:09.442 [113/740] Generating lib/rte_net_mingw with a custom command 00:03:09.442 [114/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.706 [115/740] Generating lib/rte_meter_mingw with a custom command 00:03:09.706 [116/740] Generating lib/rte_meter_def with a custom command 00:03:09.706 [117/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:09.706 [118/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:09.706 [119/740] Linking static target lib/librte_meter.a 00:03:09.706 [120/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:09.706 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:09.965 [122/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:09.965 [123/740] Linking static target lib/librte_net.a 00:03:09.965 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.965 [125/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:10.224 [126/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.224 
[127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:10.224 [128/740] Linking static target lib/librte_mbuf.a 00:03:10.224 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:10.224 [130/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.224 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:10.224 [132/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.224 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:10.483 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:10.483 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:10.483 [136/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.483 [137/740] Generating lib/rte_ethdev_def with a custom command 00:03:10.749 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:03:10.749 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:10.749 [140/740] Generating lib/rte_pci_def with a custom command 00:03:10.749 [141/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:10.749 [142/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:10.749 [143/740] Linking static target lib/librte_pci.a 00:03:10.749 [144/740] Generating lib/rte_pci_mingw with a custom command 00:03:10.749 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:10.749 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:10.749 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:11.019 [148/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:11.019 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:11.019 [150/740] Generating 
lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.019 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:11.019 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:11.019 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:11.019 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:11.019 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:11.019 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:11.019 [157/740] Generating lib/rte_cmdline_def with a custom command 00:03:11.019 [158/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:11.019 [159/740] Generating lib/rte_cmdline_mingw with a custom command 00:03:11.019 [160/740] Generating lib/rte_metrics_def with a custom command 00:03:11.019 [161/740] Generating lib/rte_metrics_mingw with a custom command 00:03:11.019 [162/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:11.279 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:11.279 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:11.279 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:11.279 [166/740] Generating lib/rte_hash_def with a custom command 00:03:11.279 [167/740] Generating lib/rte_hash_mingw with a custom command 00:03:11.279 [168/740] Generating lib/rte_timer_def with a custom command 00:03:11.279 [169/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:11.279 [170/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:11.279 [171/740] Linking static target lib/librte_cmdline.a 00:03:11.279 [172/740] Generating lib/rte_timer_mingw with a custom command 00:03:11.279 [173/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:11.538 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:11.538 [175/740] Linking static target lib/librte_metrics.a 00:03:11.538 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:11.538 [177/740] Linking static target lib/librte_timer.a 00:03:11.798 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.798 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:12.059 [180/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.059 [181/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:12.059 [182/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:12.319 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.319 [184/740] Generating lib/rte_acl_def with a custom command 00:03:12.319 [185/740] Generating lib/rte_acl_mingw with a custom command 00:03:12.319 [186/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:12.319 [187/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:12.319 [188/740] Generating lib/rte_bbdev_def with a custom command 00:03:12.319 [189/740] Generating lib/rte_bbdev_mingw with a custom command 00:03:12.319 [190/740] Generating lib/rte_bitratestats_def with a custom command 00:03:12.319 [191/740] Generating lib/rte_bitratestats_mingw with a custom command 00:03:12.319 [192/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:12.319 [193/740] Linking static target lib/librte_ethdev.a 00:03:12.888 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:12.888 [195/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:12.888 [196/740] Linking static target lib/librte_bitratestats.a 00:03:12.888 [197/740] Generating lib/bitratestats.sym_chk 
with a custom command (wrapped by meson to capture output) 00:03:12.888 [198/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:13.147 [199/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:13.147 [200/740] Linking static target lib/librte_bbdev.a 00:03:13.147 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:13.407 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:13.667 [203/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.667 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:13.667 [205/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:13.667 [206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:13.667 [207/740] Linking static target lib/librte_hash.a 00:03:13.926 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:13.926 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:14.186 [210/740] Generating lib/rte_bpf_def with a custom command 00:03:14.186 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:03:14.186 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:14.186 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:03:14.186 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:03:14.446 [215/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:14.446 [216/740] Linking static target lib/librte_cfgfile.a 00:03:14.446 [217/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:14.446 [218/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:14.446 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.446 [220/740] Generating lib/rte_compressdev_def with a custom command 00:03:14.446 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:03:14.446 
[222/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:14.706 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:14.706 [224/740] Linking static target lib/librte_bpf.a 00:03:14.707 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.707 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:14.707 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:03:14.707 [228/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:14.967 [229/740] Generating lib/rte_cryptodev_mingw with a custom command 00:03:14.967 [230/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:14.967 [231/740] Linking static target lib/librte_acl.a 00:03:14.967 [232/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.967 [233/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:14.967 [234/740] Generating lib/rte_distributor_def with a custom command 00:03:14.967 [235/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:14.967 [236/740] Linking static target lib/librte_compressdev.a 00:03:14.967 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:03:15.227 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:15.227 [239/740] Generating lib/rte_efd_def with a custom command 00:03:15.227 [240/740] Generating lib/rte_efd_mingw with a custom command 00:03:15.227 [241/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.227 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:15.227 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:15.487 [244/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:15.487 [245/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:15.487 [246/740] Linking static target lib/librte_distributor.a 00:03:15.748 [247/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:15.748 [248/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.748 [249/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.008 [250/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.008 [251/740] Linking target lib/librte_eal.so.23.0 00:03:16.008 [252/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:16.008 [253/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:16.268 [254/740] Linking target lib/librte_ring.so.23.0 00:03:16.268 [255/740] Linking target lib/librte_meter.so.23.0 00:03:16.268 [256/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:16.268 [257/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:16.268 [258/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:16.268 [259/740] Linking target lib/librte_pci.so.23.0 00:03:16.268 [260/740] Linking target lib/librte_rcu.so.23.0 00:03:16.268 [261/740] Linking target lib/librte_mempool.so.23.0 00:03:16.528 [262/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:16.528 [263/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:16.528 [264/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:16.528 [265/740] Linking target lib/librte_timer.so.23.0 00:03:16.528 [266/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:16.528 
[267/740] Linking target lib/librte_acl.so.23.0 00:03:16.528 [268/740] Linking target lib/librte_mbuf.so.23.0 00:03:16.528 [269/740] Linking target lib/librte_cfgfile.so.23.0 00:03:16.528 [270/740] Linking static target lib/librte_efd.a 00:03:16.528 [271/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:16.528 [272/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:16.528 [273/740] Generating lib/rte_eventdev_def with a custom command 00:03:16.528 [274/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:16.528 [275/740] Generating lib/rte_eventdev_mingw with a custom command 00:03:16.788 [276/740] Linking target lib/librte_bbdev.so.23.0 00:03:16.788 [277/740] Linking target lib/librte_net.so.23.0 00:03:16.788 [278/740] Linking target lib/librte_compressdev.so.23.0 00:03:16.788 [279/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:16.788 [280/740] Linking static target lib/librte_cryptodev.a 00:03:16.788 [281/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:16.788 [282/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.788 [283/740] Linking target lib/librte_distributor.so.23.0 00:03:16.788 [284/740] Generating lib/rte_gpudev_def with a custom command 00:03:16.788 [285/740] Generating lib/rte_gpudev_mingw with a custom command 00:03:16.788 [286/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:16.788 [287/740] Linking target lib/librte_cmdline.so.23.0 00:03:16.788 [288/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:16.788 [289/740] Linking target lib/librte_hash.so.23.0 00:03:16.788 [290/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.048 [291/740] Linking target lib/librte_ethdev.so.23.0 00:03:17.048 
[292/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:17.048 [293/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:17.048 [294/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:17.048 [295/740] Linking target lib/librte_efd.so.23.0 00:03:17.048 [296/740] Linking target lib/librte_metrics.so.23.0 00:03:17.048 [297/740] Linking target lib/librte_bpf.so.23.0 00:03:17.308 [298/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:17.308 [299/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:17.308 [300/740] Linking target lib/librte_bitratestats.so.23.0 00:03:17.308 [301/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:17.308 [302/740] Linking static target lib/librte_gpudev.a 00:03:17.308 [303/740] Generating lib/rte_gro_def with a custom command 00:03:17.309 [304/740] Generating lib/rte_gro_mingw with a custom command 00:03:17.309 [305/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:17.309 [306/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:17.309 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:17.569 [308/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:17.569 [309/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:17.569 [310/740] Linking static target lib/librte_gro.a 00:03:17.569 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:17.569 [312/740] Generating lib/rte_gso_def with a custom command 00:03:17.829 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:17.829 [314/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:17.829 [315/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:17.829 [316/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:17.829 [317/740] Generating lib/rte_gso_mingw with a custom command 00:03:17.829 [318/740] Linking static target lib/librte_eventdev.a 00:03:17.829 [319/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.829 [320/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:17.829 [321/740] Linking target lib/librte_gro.so.23.0 00:03:17.829 [322/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:17.829 [323/740] Linking static target lib/librte_gso.a 00:03:18.089 [324/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.089 [325/740] Linking target lib/librte_gpudev.so.23.0 00:03:18.089 [326/740] Generating lib/rte_ip_frag_def with a custom command 00:03:18.089 [327/740] Generating lib/rte_ip_frag_mingw with a custom command 00:03:18.089 [328/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.089 [329/740] Linking target lib/librte_gso.so.23.0 00:03:18.089 [330/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:18.089 [331/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:18.089 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:03:18.089 [333/740] Generating lib/rte_jobstats_def with a custom command 00:03:18.089 [334/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:18.348 [335/740] Linking static target lib/librte_jobstats.a 00:03:18.348 [336/740] Generating lib/rte_latencystats_def with a custom command 00:03:18.348 [337/740] Generating lib/rte_latencystats_mingw with a custom command 00:03:18.348 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:18.348 [339/740] Generating lib/rte_lpm_def with a custom command 00:03:18.348 [340/740] Generating lib/rte_lpm_mingw with a 
custom command 00:03:18.349 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:18.349 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:18.610 [343/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:18.610 [344/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.610 [345/740] Linking static target lib/librte_ip_frag.a 00:03:18.610 [346/740] Linking target lib/librte_jobstats.so.23.0 00:03:18.610 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:18.610 [348/740] Linking static target lib/librte_latencystats.a 00:03:18.610 [349/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.877 [350/740] Linking target lib/librte_cryptodev.so.23.0 00:03:18.877 [351/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.877 [352/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:18.877 [353/740] Linking target lib/librte_ip_frag.so.23.0 00:03:18.877 [354/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:18.877 [355/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.877 [356/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:18.877 [357/740] Generating lib/rte_member_def with a custom command 00:03:18.877 [358/740] Linking target lib/librte_latencystats.so.23.0 00:03:18.877 [359/740] Generating lib/rte_member_mingw with a custom command 00:03:18.877 [360/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:18.877 [361/740] Generating lib/rte_pcapng_def with a custom command 00:03:18.877 [362/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:18.877 
[363/740] Generating lib/rte_pcapng_mingw with a custom command 00:03:18.877 [364/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:19.137 [365/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:19.137 [366/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:19.137 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:19.137 [368/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:19.137 [369/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:19.137 [370/740] Linking static target lib/librte_lpm.a 00:03:19.399 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:19.399 [372/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:19.399 [373/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:19.399 [374/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.399 [375/740] Generating lib/rte_power_def with a custom command 00:03:19.399 [376/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:19.399 [377/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:19.399 [378/740] Generating lib/rte_power_mingw with a custom command 00:03:19.399 [379/740] Generating lib/rte_rawdev_def with a custom command 00:03:19.399 [380/740] Linking target lib/librte_eventdev.so.23.0 00:03:19.399 [381/740] Generating lib/rte_rawdev_mingw with a custom command 00:03:19.658 [382/740] Generating lib/rte_regexdev_def with a custom command 00:03:19.658 [383/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.658 [384/740] Generating lib/rte_regexdev_mingw with a custom command 00:03:19.658 [385/740] Linking target lib/librte_lpm.so.23.0 00:03:19.658 [386/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 
00:03:19.658 [387/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:19.658 [388/740] Generating lib/rte_dmadev_def with a custom command 00:03:19.658 [389/740] Linking static target lib/librte_pcapng.a 00:03:19.658 [390/740] Generating lib/rte_dmadev_mingw with a custom command 00:03:19.658 [391/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:19.658 [392/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:19.658 [393/740] Generating lib/rte_rib_def with a custom command 00:03:19.658 [394/740] Generating lib/rte_rib_mingw with a custom command 00:03:19.658 [395/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:19.658 [396/740] Linking static target lib/librte_rawdev.a 00:03:19.918 [397/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:19.918 [398/740] Generating lib/rte_reorder_def with a custom command 00:03:19.918 [399/740] Generating lib/rte_reorder_mingw with a custom command 00:03:19.918 [400/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.918 [401/740] Linking target lib/librte_pcapng.so.23.0 00:03:19.918 [402/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:19.918 [403/740] Linking static target lib/librte_dmadev.a 00:03:19.918 [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:19.918 [405/740] Linking static target lib/librte_power.a 00:03:19.918 [406/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:19.918 [407/740] Linking static target lib/librte_regexdev.a 00:03:20.178 [408/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:20.178 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:20.178 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.178 
[411/740] Linking target lib/librte_rawdev.so.23.0 00:03:20.178 [412/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:20.178 [413/740] Generating lib/rte_sched_def with a custom command 00:03:20.178 [414/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:20.178 [415/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:20.438 [416/740] Generating lib/rte_sched_mingw with a custom command 00:03:20.438 [417/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:20.438 [418/740] Linking static target lib/librte_member.a 00:03:20.438 [419/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:20.438 [420/740] Generating lib/rte_security_mingw with a custom command 00:03:20.438 [421/740] Generating lib/rte_security_def with a custom command 00:03:20.438 [422/740] Linking static target lib/librte_reorder.a 00:03:20.438 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:20.438 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:20.438 [425/740] Generating lib/rte_stack_def with a custom command 00:03:20.438 [426/740] Generating lib/rte_stack_mingw with a custom command 00:03:20.438 [427/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:20.439 [428/740] Linking static target lib/librte_stack.a 00:03:20.439 [429/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:20.439 [430/740] Linking static target lib/librte_rib.a 00:03:20.439 [431/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.439 [432/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.698 [433/740] Linking target lib/librte_dmadev.so.23.0 00:03:20.698 [434/740] Linking target lib/librte_reorder.so.23.0 00:03:20.698 [435/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:20.698 [436/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:20.698 [437/740] Linking target lib/librte_member.so.23.0 00:03:20.698 [438/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.698 [439/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:20.698 [440/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.698 [441/740] Linking target lib/librte_stack.so.23.0 00:03:20.698 [442/740] Linking target lib/librte_regexdev.so.23.0 00:03:20.957 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.957 [444/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:20.957 [445/740] Linking static target lib/librte_security.a 00:03:20.957 [446/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.957 [447/740] Linking target lib/librte_power.so.23.0 00:03:20.957 [448/740] Linking target lib/librte_rib.so.23.0 00:03:20.957 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:20.957 [450/740] Generating lib/rte_vhost_def with a custom command 00:03:21.217 [451/740] Generating lib/rte_vhost_mingw with a custom command 00:03:21.217 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:21.217 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:21.217 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.217 [455/740] Linking target lib/librte_security.so.23.0 00:03:21.217 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:21.476 [457/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:21.476 [458/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:21.476 [459/740] Linking static 
target lib/librte_sched.a 00:03:21.735 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:21.735 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:21.735 [462/740] Generating lib/rte_ipsec_def with a custom command 00:03:21.735 [463/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.735 [464/740] Generating lib/rte_ipsec_mingw with a custom command 00:03:21.735 [465/740] Linking target lib/librte_sched.so.23.0 00:03:21.735 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:21.995 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:21.995 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:21.995 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:21.995 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:21.995 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:21.995 [472/740] Generating lib/rte_fib_def with a custom command 00:03:22.255 [473/740] Generating lib/rte_fib_mingw with a custom command 00:03:22.255 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:22.514 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:22.514 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:22.514 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:22.514 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:22.774 [479/740] Linking static target lib/librte_ipsec.a 00:03:22.774 [480/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:22.774 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:22.774 [482/740] Linking static target lib/librte_fib.a 00:03:22.774 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:22.774 [484/740] Compiling C object 
lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:23.033 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.033 [486/740] Linking target lib/librte_ipsec.so.23.0 00:03:23.033 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:23.033 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:23.293 [489/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.293 [490/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:23.293 [491/740] Linking target lib/librte_fib.so.23.0 00:03:23.553 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:23.813 [493/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:23.813 [494/740] Generating lib/rte_port_def with a custom command 00:03:23.813 [495/740] Generating lib/rte_port_mingw with a custom command 00:03:23.813 [496/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:23.813 [497/740] Generating lib/rte_pdump_def with a custom command 00:03:23.813 [498/740] Generating lib/rte_pdump_mingw with a custom command 00:03:23.813 [499/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:23.813 [500/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:23.813 [501/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:23.813 [502/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:24.073 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:24.073 [504/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:24.073 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:24.332 [506/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:24.332 [507/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:24.332 [508/740] Linking static target lib/librte_port.a 00:03:24.332 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:24.332 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:24.332 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:24.332 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:24.591 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:24.591 [514/740] Linking static target lib/librte_pdump.a 00:03:24.591 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.851 [516/740] Linking target lib/librte_port.so.23.0 00:03:24.851 [517/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.851 [518/740] Linking target lib/librte_pdump.so.23.0 00:03:24.851 [519/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:24.851 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:24.851 [521/740] Generating lib/rte_table_def with a custom command 00:03:24.851 [522/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:24.851 [523/740] Generating lib/rte_table_mingw with a custom command 00:03:25.111 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:25.111 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:25.111 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:25.111 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:25.371 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:25.371 [529/740] Generating lib/rte_pipeline_def with a custom command 00:03:25.371 [530/740] Generating lib/rte_pipeline_mingw with 
a custom command 00:03:25.371 [531/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:25.371 [532/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:25.371 [533/740] Linking static target lib/librte_table.a 00:03:25.631 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:25.890 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:25.890 [536/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:25.890 [537/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.890 [538/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:25.890 [539/740] Linking target lib/librte_table.so.23.0 00:03:26.151 [540/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:26.151 [541/740] Generating lib/rte_graph_def with a custom command 00:03:26.151 [542/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:26.151 [543/740] Generating lib/rte_graph_mingw with a custom command 00:03:26.410 [544/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:26.410 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:26.410 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:26.410 [547/740] Linking static target lib/librte_graph.a 00:03:26.410 [548/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:26.670 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:26.670 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:26.670 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:26.931 [552/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:26.931 [553/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.931 [554/740] 
Linking target lib/librte_graph.so.23.0 00:03:26.931 [555/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:26.931 [556/740] Generating lib/rte_node_def with a custom command 00:03:27.191 [557/740] Generating lib/rte_node_mingw with a custom command 00:03:27.191 [558/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:27.191 [559/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:27.191 [560/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:27.191 [561/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:27.191 [562/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:27.191 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:27.191 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:03:27.191 [565/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:27.454 [566/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:27.454 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:27.454 [568/740] Generating drivers/rte_bus_vdev_def with a custom command 00:03:27.454 [569/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:27.454 [570/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:27.454 [571/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:27.454 [572/740] Generating drivers/rte_mempool_ring_def with a custom command 00:03:27.454 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:27.454 [574/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:27.454 [575/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:27.454 [576/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:27.454 [577/740] Linking static target 
drivers/libtmp_rte_bus_vdev.a 00:03:27.454 [578/740] Linking static target lib/librte_node.a 00:03:27.735 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:27.735 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:27.735 [581/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.735 [582/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:27.735 [583/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:28.008 [584/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:28.008 [585/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:28.008 [586/740] Linking static target drivers/librte_bus_pci.a 00:03:28.008 [587/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:28.008 [588/740] Linking static target drivers/librte_bus_vdev.a 00:03:28.008 [589/740] Linking target lib/librte_node.so.23.0 00:03:28.008 [590/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:28.008 [591/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.008 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:03:28.008 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:28.008 [594/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:28.267 [595/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.267 [596/740] Linking target drivers/librte_bus_pci.so.23.0 00:03:28.267 [597/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:28.267 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:28.267 [599/740] 
Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:28.267 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:28.267 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:28.526 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:28.526 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:28.526 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:28.526 [605/740] Linking static target drivers/librte_mempool_ring.a 00:03:28.526 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:28.526 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:03:28.786 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:29.046 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:29.046 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:29.046 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:29.306 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:29.565 [613/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:29.825 [614/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:29.825 [615/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:30.085 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:30.085 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:30.085 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:30.085 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:03:30.085 [620/740] 
Generating drivers/rte_net_i40e_mingw with a custom command 00:03:30.085 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:30.345 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:30.914 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:31.174 [624/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:31.174 [625/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:31.174 [626/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:31.174 [627/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:31.434 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:31.434 [629/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:31.434 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:31.434 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:31.694 [632/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:31.694 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:31.954 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:31.954 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:31.954 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:31.954 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:32.215 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:32.215 [639/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:32.475 [640/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:32.475 [641/740] Linking static target 
drivers/librte_net_i40e.a 00:03:32.475 [642/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:32.475 [643/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:32.475 [644/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:32.475 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:32.475 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:32.735 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:32.735 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:32.995 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:32.995 [650/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.995 [651/740] Linking target drivers/librte_net_i40e.so.23.0 00:03:32.995 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:33.255 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:33.255 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:33.255 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:33.255 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:33.255 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:33.515 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:33.515 [659/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:33.515 
[660/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:33.515 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:33.775 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:33.775 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:34.035 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:34.035 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:34.035 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:34.295 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:34.555 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:34.555 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:34.555 [670/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:34.814 [671/740] Linking static target lib/librte_vhost.a 00:03:34.814 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:34.814 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:34.814 [674/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:35.073 [675/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:35.074 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:35.074 [677/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:35.333 [678/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:35.333 [679/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:35.333 [680/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 
00:03:35.333 [681/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:35.333 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:35.333 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:35.594 [684/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.594 [685/740] Linking target lib/librte_vhost.so.23.0 00:03:35.854 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:35.854 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:35.854 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:35.854 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:35.854 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:35.854 [691/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:36.114 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:36.114 [693/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:36.375 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:36.375 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:36.375 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:36.635 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:36.635 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:36.895 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:36.895 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:36.895 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:37.155 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:37.155 [703/740] 
Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:37.415 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:37.415 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:37.675 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:37.675 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:37.935 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:37.935 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:38.195 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:38.195 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:38.195 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:38.195 [713/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:38.455 [714/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:38.455 [715/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:38.455 [716/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:38.455 [717/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:38.715 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:38.974 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:41.522 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:41.522 [721/740] Linking static target lib/librte_pipeline.a 00:03:41.522 [722/740] Linking target app/dpdk-test-cmdline 00:03:41.522 [723/740] Linking target app/dpdk-pdump 00:03:41.522 [724/740] Linking target app/dpdk-test-compress-perf 00:03:41.522 [725/740] Linking target app/dpdk-proc-info 00:03:41.522 [726/740] Linking target app/dpdk-dumpcap 00:03:41.522 [727/740] Linking target app/dpdk-test-acl 00:03:41.522 [728/740] 
Linking target app/dpdk-test-bbdev 00:03:41.522 [729/740] Linking target app/dpdk-test-crypto-perf 00:03:41.522 [730/740] Linking target app/dpdk-test-eventdev 00:03:41.781 [731/740] Linking target app/dpdk-test-fib 00:03:41.781 [732/740] Linking target app/dpdk-test-pipeline 00:03:41.781 [733/740] Linking target app/dpdk-test-gpudev 00:03:41.781 [734/740] Linking target app/dpdk-test-flow-perf 00:03:41.781 [735/740] Linking target app/dpdk-test-regex 00:03:42.041 [736/740] Linking target app/dpdk-test-security-perf 00:03:42.041 [737/740] Linking target app/dpdk-test-sad 00:03:42.041 [738/740] Linking target app/dpdk-testpmd 00:03:46.243 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.503 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:46.503 02:18:05 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:46.503 02:18:05 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:46.503 02:18:05 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:46.503 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:46.503 [0/1] Installing files. 
00:03:46.766 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.766 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.767 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.767 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:46.768 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:46.768 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.768 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.769 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.769 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.770 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.770 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.771 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:46.771 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:46.771 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:46.771 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:46.771 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_eal.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.771 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing 
lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing 
lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.032 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 
Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:47.033 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:47.033 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:47.033 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.033 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:47.033 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-acl to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:47.033 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.034 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.296 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.297 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 
Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:47.298 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:47.298 Installing symlink pointing to librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:47.298 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:47.298 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:47.298 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:47.298 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:47.298 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:47.298 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:47.298 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:47.298 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:47.298 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:47.298 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:47.298 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:47.298 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:47.298 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:47.298 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:47.299 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:47.299 Installing symlink pointing to librte_meter.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:47.299 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:47.299 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:47.299 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:47.299 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:47.299 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:47.299 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:47.299 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:47.299 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:47.299 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:47.299 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:47.299 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:47.299 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:47.299 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:47.299 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:47.299 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:47.299 Installing symlink pointing to librte_bbdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:47.299 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:47.299 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:47.299 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:47.299 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:47.299 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:47.299 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:47.299 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:47.299 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:47.299 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:47.299 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:47.299 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:47.299 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:47.299 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:47.299 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:47.299 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 
00:03:47.299 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:47.299 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:47.299 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:47.299 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:47.299 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:47.299 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:47.299 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:47.299 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:47.299 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:47.299 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:47.299 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:47.299 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:47.299 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:47.299 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:47.299 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:47.299 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:47.299 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:47.299 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:47.299 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:47.299 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:47.299 Installing 
symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:47.299 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:47.299 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:47.299 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:47.299 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:47.299 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:47.299 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:47.299 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:47.299 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:47.299 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:47.299 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:47.299 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:47.299 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:47.299 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:47.299 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:47.299 Installing symlink pointing to librte_rawdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:47.299 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:47.299 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:47.299 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:47.299 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:47.299 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:47.299 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:47.299 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:47.299 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:47.299 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:47.299 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:47.299 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:47.299 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:47.299 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:47.299 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:47.299 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:47.299 Installing symlink pointing to 
librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:47.299 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:47.299 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:47.299 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:47.299 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:47.299 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:47.299 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:47.299 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:47.299 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:47.299 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:47.299 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:47.299 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:47.299 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:47.299 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:47.299 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:47.299 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:47.299 Installing symlink pointing to librte_node.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:47.299 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:47.299 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:47.299 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:47.299 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:47.299 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:47.299 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:47.299 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:47.299 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:47.299 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:47.299 02:18:05 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:47.299 02:18:05 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:47.299 00:03:47.299 real 0m47.655s 00:03:47.299 user 4m31.164s 00:03:47.299 sys 0m55.286s 00:03:47.299 02:18:05 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:47.300 02:18:05 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:47.300 ************************************ 00:03:47.300 END TEST build_native_dpdk 00:03:47.300 ************************************ 00:03:47.300 02:18:05 
-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:47.300 02:18:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:47.300 02:18:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:47.300 02:18:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:47.300 02:18:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:47.300 02:18:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:47.300 02:18:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:47.300 02:18:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:47.559 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:47.559 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.559 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:47.559 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:48.128 Using 'verbs' RDMA provider 00:04:03.962 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:22.096 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:22.096 Creating mk/config.mk...done. 00:04:22.096 Creating mk/cc.flags.mk...done. 00:04:22.096 Type 'make' to build. 
00:04:22.096 02:18:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:22.096 02:18:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:22.096 02:18:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:22.096 02:18:38 -- common/autotest_common.sh@10 -- $ set +x 00:04:22.096 ************************************ 00:04:22.096 START TEST make 00:04:22.096 ************************************ 00:04:22.096 02:18:38 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:22.096 make[1]: Nothing to be done for 'all'. 00:05:08.792 CC lib/log/log.o 00:05:08.792 CC lib/log/log_flags.o 00:05:08.792 CC lib/log/log_deprecated.o 00:05:08.792 CC lib/ut_mock/mock.o 00:05:08.792 CC lib/ut/ut.o 00:05:08.792 LIB libspdk_log.a 00:05:08.792 LIB libspdk_ut_mock.a 00:05:08.792 LIB libspdk_ut.a 00:05:08.792 SO libspdk_log.so.7.0 00:05:08.792 SO libspdk_ut_mock.so.6.0 00:05:08.792 SO libspdk_ut.so.2.0 00:05:08.792 SYMLINK libspdk_ut_mock.so 00:05:08.792 SYMLINK libspdk_log.so 00:05:08.792 SYMLINK libspdk_ut.so 00:05:08.792 CC lib/ioat/ioat.o 00:05:08.792 CC lib/util/bit_array.o 00:05:08.792 CC lib/util/base64.o 00:05:08.792 CC lib/util/cpuset.o 00:05:08.792 CC lib/util/crc16.o 00:05:08.792 CC lib/util/crc32c.o 00:05:08.792 CXX lib/trace_parser/trace.o 00:05:08.792 CC lib/util/crc32.o 00:05:08.792 CC lib/dma/dma.o 00:05:08.792 CC lib/vfio_user/host/vfio_user_pci.o 00:05:08.792 CC lib/util/crc32_ieee.o 00:05:08.792 CC lib/util/crc64.o 00:05:08.792 CC lib/util/dif.o 00:05:08.792 CC lib/util/fd.o 00:05:08.792 CC lib/util/fd_group.o 00:05:08.792 LIB libspdk_dma.a 00:05:08.792 CC lib/util/file.o 00:05:08.792 SO libspdk_dma.so.5.0 00:05:08.792 CC lib/vfio_user/host/vfio_user.o 00:05:08.792 CC lib/util/hexlify.o 00:05:08.792 CC lib/util/iov.o 00:05:08.792 LIB libspdk_ioat.a 00:05:08.792 SYMLINK libspdk_dma.so 00:05:08.792 CC lib/util/math.o 00:05:08.792 CC lib/util/net.o 00:05:08.792 SO libspdk_ioat.so.7.0 00:05:08.792 CC lib/util/pipe.o 00:05:08.792 SYMLINK 
libspdk_ioat.so 00:05:08.792 CC lib/util/strerror_tls.o 00:05:08.792 CC lib/util/string.o 00:05:08.792 CC lib/util/uuid.o 00:05:08.792 CC lib/util/xor.o 00:05:08.792 CC lib/util/zipf.o 00:05:08.792 LIB libspdk_vfio_user.a 00:05:08.792 CC lib/util/md5.o 00:05:08.792 SO libspdk_vfio_user.so.5.0 00:05:08.792 SYMLINK libspdk_vfio_user.so 00:05:08.792 LIB libspdk_util.a 00:05:09.051 SO libspdk_util.so.10.0 00:05:09.051 LIB libspdk_trace_parser.a 00:05:09.051 SYMLINK libspdk_util.so 00:05:09.051 SO libspdk_trace_parser.so.6.0 00:05:09.311 SYMLINK libspdk_trace_parser.so 00:05:09.311 CC lib/json/json_parse.o 00:05:09.311 CC lib/json/json_util.o 00:05:09.311 CC lib/rdma_provider/common.o 00:05:09.311 CC lib/json/json_write.o 00:05:09.311 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:09.311 CC lib/idxd/idxd.o 00:05:09.311 CC lib/rdma_utils/rdma_utils.o 00:05:09.311 CC lib/conf/conf.o 00:05:09.311 CC lib/vmd/vmd.o 00:05:09.311 CC lib/env_dpdk/env.o 00:05:09.569 CC lib/env_dpdk/memory.o 00:05:09.569 LIB libspdk_rdma_provider.a 00:05:09.569 SO libspdk_rdma_provider.so.6.0 00:05:09.569 LIB libspdk_conf.a 00:05:09.569 CC lib/env_dpdk/pci.o 00:05:09.569 SYMLINK libspdk_rdma_provider.so 00:05:09.569 CC lib/env_dpdk/init.o 00:05:09.569 CC lib/env_dpdk/threads.o 00:05:09.569 SO libspdk_conf.so.6.0 00:05:09.569 LIB libspdk_rdma_utils.a 00:05:09.569 LIB libspdk_json.a 00:05:09.569 SO libspdk_rdma_utils.so.1.0 00:05:09.569 SO libspdk_json.so.6.0 00:05:09.569 SYMLINK libspdk_conf.so 00:05:09.829 CC lib/env_dpdk/pci_ioat.o 00:05:09.829 SYMLINK libspdk_rdma_utils.so 00:05:09.829 CC lib/env_dpdk/pci_virtio.o 00:05:09.829 SYMLINK libspdk_json.so 00:05:09.829 CC lib/env_dpdk/pci_vmd.o 00:05:09.829 CC lib/env_dpdk/pci_idxd.o 00:05:09.829 CC lib/vmd/led.o 00:05:09.829 CC lib/env_dpdk/pci_event.o 00:05:09.829 CC lib/env_dpdk/sigbus_handler.o 00:05:09.829 CC lib/env_dpdk/pci_dpdk.o 00:05:10.089 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:10.089 CC lib/jsonrpc/jsonrpc_server.o 00:05:10.089 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:05:10.089 CC lib/idxd/idxd_user.o 00:05:10.089 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:10.089 CC lib/idxd/idxd_kernel.o 00:05:10.089 CC lib/jsonrpc/jsonrpc_client.o 00:05:10.089 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:10.089 LIB libspdk_vmd.a 00:05:10.089 SO libspdk_vmd.so.6.0 00:05:10.349 SYMLINK libspdk_vmd.so 00:05:10.349 LIB libspdk_idxd.a 00:05:10.349 LIB libspdk_jsonrpc.a 00:05:10.349 SO libspdk_jsonrpc.so.6.0 00:05:10.349 SO libspdk_idxd.so.12.1 00:05:10.609 SYMLINK libspdk_jsonrpc.so 00:05:10.609 SYMLINK libspdk_idxd.so 00:05:10.868 CC lib/rpc/rpc.o 00:05:11.127 LIB libspdk_rpc.a 00:05:11.127 SO libspdk_rpc.so.6.0 00:05:11.127 SYMLINK libspdk_rpc.so 00:05:11.386 LIB libspdk_env_dpdk.a 00:05:11.386 SO libspdk_env_dpdk.so.15.0 00:05:11.645 SYMLINK libspdk_env_dpdk.so 00:05:11.645 CC lib/keyring/keyring_rpc.o 00:05:11.645 CC lib/keyring/keyring.o 00:05:11.645 CC lib/notify/notify_rpc.o 00:05:11.645 CC lib/notify/notify.o 00:05:11.645 CC lib/trace/trace_flags.o 00:05:11.645 CC lib/trace/trace.o 00:05:11.645 CC lib/trace/trace_rpc.o 00:05:11.645 LIB libspdk_notify.a 00:05:11.904 LIB libspdk_keyring.a 00:05:11.904 SO libspdk_notify.so.6.0 00:05:11.904 SO libspdk_keyring.so.2.0 00:05:11.904 SYMLINK libspdk_notify.so 00:05:11.904 LIB libspdk_trace.a 00:05:11.904 SYMLINK libspdk_keyring.so 00:05:11.904 SO libspdk_trace.so.11.0 00:05:12.163 SYMLINK libspdk_trace.so 00:05:12.422 CC lib/sock/sock.o 00:05:12.422 CC lib/thread/iobuf.o 00:05:12.422 CC lib/thread/thread.o 00:05:12.422 CC lib/sock/sock_rpc.o 00:05:12.992 LIB libspdk_sock.a 00:05:12.992 SO libspdk_sock.so.10.0 00:05:12.992 SYMLINK libspdk_sock.so 00:05:13.563 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:13.563 CC lib/nvme/nvme_ctrlr.o 00:05:13.563 CC lib/nvme/nvme_fabric.o 00:05:13.563 CC lib/nvme/nvme_ns_cmd.o 00:05:13.563 CC lib/nvme/nvme_pcie_common.o 00:05:13.563 CC lib/nvme/nvme_ns.o 00:05:13.563 CC lib/nvme/nvme_pcie.o 00:05:13.563 CC lib/nvme/nvme.o 00:05:13.563 CC 
lib/nvme/nvme_qpair.o 00:05:14.137 CC lib/nvme/nvme_quirks.o 00:05:14.137 CC lib/nvme/nvme_transport.o 00:05:14.137 LIB libspdk_thread.a 00:05:14.137 SO libspdk_thread.so.10.1 00:05:14.137 CC lib/nvme/nvme_discovery.o 00:05:14.137 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:14.404 SYMLINK libspdk_thread.so 00:05:14.404 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:14.404 CC lib/nvme/nvme_tcp.o 00:05:14.404 CC lib/nvme/nvme_opal.o 00:05:14.404 CC lib/nvme/nvme_io_msg.o 00:05:14.664 CC lib/nvme/nvme_poll_group.o 00:05:14.664 CC lib/nvme/nvme_zns.o 00:05:14.664 CC lib/nvme/nvme_stubs.o 00:05:14.924 CC lib/nvme/nvme_auth.o 00:05:15.184 CC lib/nvme/nvme_cuse.o 00:05:15.184 CC lib/accel/accel.o 00:05:15.184 CC lib/nvme/nvme_rdma.o 00:05:15.184 CC lib/blob/blobstore.o 00:05:15.444 CC lib/blob/request.o 00:05:15.444 CC lib/blob/zeroes.o 00:05:15.444 CC lib/init/json_config.o 00:05:15.704 CC lib/blob/blob_bs_dev.o 00:05:15.704 CC lib/init/subsystem.o 00:05:15.964 CC lib/virtio/virtio.o 00:05:15.964 CC lib/init/subsystem_rpc.o 00:05:15.964 CC lib/init/rpc.o 00:05:15.964 CC lib/virtio/virtio_vhost_user.o 00:05:15.964 CC lib/fsdev/fsdev.o 00:05:16.224 CC lib/virtio/virtio_vfio_user.o 00:05:16.224 CC lib/fsdev/fsdev_io.o 00:05:16.224 LIB libspdk_init.a 00:05:16.224 SO libspdk_init.so.6.0 00:05:16.224 SYMLINK libspdk_init.so 00:05:16.224 CC lib/fsdev/fsdev_rpc.o 00:05:16.224 CC lib/virtio/virtio_pci.o 00:05:16.224 CC lib/accel/accel_rpc.o 00:05:16.485 CC lib/accel/accel_sw.o 00:05:16.745 LIB libspdk_virtio.a 00:05:16.745 CC lib/event/app.o 00:05:16.745 CC lib/event/reactor.o 00:05:16.745 CC lib/event/log_rpc.o 00:05:16.745 CC lib/event/app_rpc.o 00:05:16.745 CC lib/event/scheduler_static.o 00:05:16.745 SO libspdk_virtio.so.7.0 00:05:16.745 LIB libspdk_accel.a 00:05:16.745 SYMLINK libspdk_virtio.so 00:05:16.745 SO libspdk_accel.so.16.0 00:05:16.745 LIB libspdk_nvme.a 00:05:17.004 SYMLINK libspdk_accel.so 00:05:17.004 LIB libspdk_fsdev.a 00:05:17.004 SO libspdk_fsdev.so.1.0 00:05:17.004 SO 
libspdk_nvme.so.14.0 00:05:17.004 SYMLINK libspdk_fsdev.so 00:05:17.264 CC lib/bdev/bdev_rpc.o 00:05:17.264 CC lib/bdev/bdev_zone.o 00:05:17.264 CC lib/bdev/bdev.o 00:05:17.264 CC lib/bdev/scsi_nvme.o 00:05:17.264 CC lib/bdev/part.o 00:05:17.264 LIB libspdk_event.a 00:05:17.264 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:17.264 SYMLINK libspdk_nvme.so 00:05:17.264 SO libspdk_event.so.14.0 00:05:17.525 SYMLINK libspdk_event.so 00:05:18.096 LIB libspdk_fuse_dispatcher.a 00:05:18.096 SO libspdk_fuse_dispatcher.so.1.0 00:05:18.096 SYMLINK libspdk_fuse_dispatcher.so 00:05:19.474 LIB libspdk_blob.a 00:05:19.474 SO libspdk_blob.so.11.0 00:05:19.474 SYMLINK libspdk_blob.so 00:05:20.044 CC lib/lvol/lvol.o 00:05:20.044 CC lib/blobfs/blobfs.o 00:05:20.044 CC lib/blobfs/tree.o 00:05:20.304 LIB libspdk_bdev.a 00:05:20.304 SO libspdk_bdev.so.16.0 00:05:20.570 SYMLINK libspdk_bdev.so 00:05:20.866 CC lib/scsi/dev.o 00:05:20.866 CC lib/scsi/port.o 00:05:20.866 CC lib/scsi/lun.o 00:05:20.866 CC lib/scsi/scsi.o 00:05:20.866 CC lib/ublk/ublk.o 00:05:20.866 CC lib/nvmf/ctrlr.o 00:05:20.866 CC lib/ftl/ftl_core.o 00:05:20.866 CC lib/nbd/nbd.o 00:05:20.866 LIB libspdk_blobfs.a 00:05:20.866 SO libspdk_blobfs.so.10.0 00:05:20.866 CC lib/scsi/scsi_bdev.o 00:05:20.866 CC lib/scsi/scsi_pr.o 00:05:20.866 SYMLINK libspdk_blobfs.so 00:05:20.866 CC lib/scsi/scsi_rpc.o 00:05:20.866 CC lib/scsi/task.o 00:05:20.866 CC lib/nbd/nbd_rpc.o 00:05:20.866 LIB libspdk_lvol.a 00:05:21.124 SO libspdk_lvol.so.10.0 00:05:21.124 CC lib/ftl/ftl_init.o 00:05:21.124 SYMLINK libspdk_lvol.so 00:05:21.124 CC lib/ublk/ublk_rpc.o 00:05:21.124 CC lib/ftl/ftl_layout.o 00:05:21.124 LIB libspdk_nbd.a 00:05:21.124 CC lib/ftl/ftl_debug.o 00:05:21.124 SO libspdk_nbd.so.7.0 00:05:21.124 CC lib/nvmf/ctrlr_discovery.o 00:05:21.124 SYMLINK libspdk_nbd.so 00:05:21.124 CC lib/ftl/ftl_io.o 00:05:21.124 CC lib/ftl/ftl_sb.o 00:05:21.124 CC lib/ftl/ftl_l2p.o 00:05:21.383 CC lib/ftl/ftl_l2p_flat.o 00:05:21.383 CC lib/nvmf/ctrlr_bdev.o 
00:05:21.383 CC lib/ftl/ftl_nv_cache.o 00:05:21.383 LIB libspdk_ublk.a 00:05:21.383 CC lib/ftl/ftl_band.o 00:05:21.383 LIB libspdk_scsi.a 00:05:21.383 SO libspdk_ublk.so.3.0 00:05:21.383 CC lib/ftl/ftl_band_ops.o 00:05:21.383 SO libspdk_scsi.so.9.0 00:05:21.383 CC lib/nvmf/subsystem.o 00:05:21.383 CC lib/nvmf/nvmf.o 00:05:21.643 SYMLINK libspdk_ublk.so 00:05:21.643 CC lib/nvmf/nvmf_rpc.o 00:05:21.643 SYMLINK libspdk_scsi.so 00:05:21.643 CC lib/nvmf/transport.o 00:05:21.643 CC lib/nvmf/tcp.o 00:05:21.643 CC lib/ftl/ftl_writer.o 00:05:21.903 CC lib/iscsi/conn.o 00:05:21.903 CC lib/nvmf/stubs.o 00:05:22.162 CC lib/nvmf/mdns_server.o 00:05:22.422 CC lib/nvmf/rdma.o 00:05:22.422 CC lib/ftl/ftl_rq.o 00:05:22.422 CC lib/nvmf/auth.o 00:05:22.422 CC lib/iscsi/init_grp.o 00:05:22.422 CC lib/ftl/ftl_reloc.o 00:05:22.422 CC lib/iscsi/iscsi.o 00:05:22.422 CC lib/vhost/vhost.o 00:05:22.681 CC lib/vhost/vhost_rpc.o 00:05:22.681 CC lib/vhost/vhost_scsi.o 00:05:22.681 CC lib/vhost/vhost_blk.o 00:05:22.681 CC lib/vhost/rte_vhost_user.o 00:05:22.941 CC lib/ftl/ftl_l2p_cache.o 00:05:23.200 CC lib/iscsi/param.o 00:05:23.200 CC lib/iscsi/portal_grp.o 00:05:23.200 CC lib/iscsi/tgt_node.o 00:05:23.200 CC lib/iscsi/iscsi_subsystem.o 00:05:23.200 CC lib/ftl/ftl_p2l.o 00:05:23.460 CC lib/iscsi/iscsi_rpc.o 00:05:23.460 CC lib/iscsi/task.o 00:05:23.460 CC lib/ftl/ftl_p2l_log.o 00:05:23.719 CC lib/ftl/mngt/ftl_mngt.o 00:05:23.719 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:23.719 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:23.719 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:23.719 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:23.719 LIB libspdk_vhost.a 00:05:23.719 SO libspdk_vhost.so.8.0 00:05:23.719 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:23.719 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:23.719 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:23.719 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:23.719 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:23.979 SYMLINK libspdk_vhost.so 00:05:23.979 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:23.979 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:05:23.979 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:23.979 CC lib/ftl/utils/ftl_conf.o 00:05:23.979 CC lib/ftl/utils/ftl_md.o 00:05:23.979 CC lib/ftl/utils/ftl_mempool.o 00:05:23.979 CC lib/ftl/utils/ftl_bitmap.o 00:05:23.979 LIB libspdk_iscsi.a 00:05:23.979 CC lib/ftl/utils/ftl_property.o 00:05:23.979 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:24.238 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:24.238 SO libspdk_iscsi.so.8.0 00:05:24.238 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:24.238 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:24.238 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:24.238 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:24.238 SYMLINK libspdk_iscsi.so 00:05:24.238 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:24.238 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:24.238 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:24.238 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:24.238 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:24.238 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:24.498 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:24.498 CC lib/ftl/base/ftl_base_dev.o 00:05:24.498 CC lib/ftl/base/ftl_base_bdev.o 00:05:24.498 CC lib/ftl/ftl_trace.o 00:05:24.498 LIB libspdk_nvmf.a 00:05:24.759 LIB libspdk_ftl.a 00:05:24.759 SO libspdk_nvmf.so.19.0 00:05:25.021 SO libspdk_ftl.so.9.0 00:05:25.021 SYMLINK libspdk_nvmf.so 00:05:25.281 SYMLINK libspdk_ftl.so 00:05:25.541 CC module/env_dpdk/env_dpdk_rpc.o 00:05:25.541 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:25.541 CC module/accel/dsa/accel_dsa.o 00:05:25.541 CC module/keyring/file/keyring.o 00:05:25.541 CC module/sock/posix/posix.o 00:05:25.541 CC module/fsdev/aio/fsdev_aio.o 00:05:25.541 CC module/accel/ioat/accel_ioat.o 00:05:25.541 CC module/accel/iaa/accel_iaa.o 00:05:25.801 CC module/blob/bdev/blob_bdev.o 00:05:25.801 CC module/accel/error/accel_error.o 00:05:25.801 LIB libspdk_env_dpdk_rpc.a 00:05:25.801 SO libspdk_env_dpdk_rpc.so.6.0 00:05:25.801 SYMLINK libspdk_env_dpdk_rpc.so 00:05:25.801 CC 
module/accel/iaa/accel_iaa_rpc.o 00:05:25.801 CC module/keyring/file/keyring_rpc.o 00:05:25.801 CC module/accel/ioat/accel_ioat_rpc.o 00:05:25.801 CC module/accel/error/accel_error_rpc.o 00:05:25.801 LIB libspdk_scheduler_dynamic.a 00:05:25.801 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:25.801 SO libspdk_scheduler_dynamic.so.4.0 00:05:25.801 LIB libspdk_accel_iaa.a 00:05:25.801 LIB libspdk_keyring_file.a 00:05:26.062 SO libspdk_keyring_file.so.2.0 00:05:26.062 LIB libspdk_blob_bdev.a 00:05:26.062 SO libspdk_accel_iaa.so.3.0 00:05:26.062 CC module/accel/dsa/accel_dsa_rpc.o 00:05:26.062 SYMLINK libspdk_scheduler_dynamic.so 00:05:26.062 SO libspdk_blob_bdev.so.11.0 00:05:26.062 LIB libspdk_accel_ioat.a 00:05:26.062 LIB libspdk_accel_error.a 00:05:26.062 SYMLINK libspdk_keyring_file.so 00:05:26.062 SYMLINK libspdk_accel_iaa.so 00:05:26.062 SO libspdk_accel_ioat.so.6.0 00:05:26.062 SO libspdk_accel_error.so.2.0 00:05:26.062 CC module/fsdev/aio/linux_aio_mgr.o 00:05:26.062 SYMLINK libspdk_blob_bdev.so 00:05:26.062 SYMLINK libspdk_accel_error.so 00:05:26.062 SYMLINK libspdk_accel_ioat.so 00:05:26.062 LIB libspdk_accel_dsa.a 00:05:26.062 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:26.062 SO libspdk_accel_dsa.so.5.0 00:05:26.062 CC module/keyring/linux/keyring.o 00:05:26.062 SYMLINK libspdk_accel_dsa.so 00:05:26.062 CC module/scheduler/gscheduler/gscheduler.o 00:05:26.062 CC module/keyring/linux/keyring_rpc.o 00:05:26.322 CC module/bdev/delay/vbdev_delay.o 00:05:26.322 LIB libspdk_scheduler_dpdk_governor.a 00:05:26.322 CC module/bdev/error/vbdev_error.o 00:05:26.322 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:26.322 CC module/blobfs/bdev/blobfs_bdev.o 00:05:26.322 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:26.322 LIB libspdk_keyring_linux.a 00:05:26.322 LIB libspdk_scheduler_gscheduler.a 00:05:26.322 LIB libspdk_fsdev_aio.a 00:05:26.322 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:26.322 CC module/bdev/gpt/gpt.o 00:05:26.322 CC 
module/bdev/error/vbdev_error_rpc.o 00:05:26.322 SO libspdk_keyring_linux.so.1.0 00:05:26.322 SO libspdk_scheduler_gscheduler.so.4.0 00:05:26.322 SO libspdk_fsdev_aio.so.1.0 00:05:26.322 SYMLINK libspdk_scheduler_gscheduler.so 00:05:26.322 SYMLINK libspdk_keyring_linux.so 00:05:26.322 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:26.322 LIB libspdk_sock_posix.a 00:05:26.322 CC module/bdev/gpt/vbdev_gpt.o 00:05:26.322 SYMLINK libspdk_fsdev_aio.so 00:05:26.322 SO libspdk_sock_posix.so.6.0 00:05:26.582 LIB libspdk_bdev_error.a 00:05:26.582 SYMLINK libspdk_sock_posix.so 00:05:26.582 SO libspdk_bdev_error.so.6.0 00:05:26.582 LIB libspdk_blobfs_bdev.a 00:05:26.582 CC module/bdev/lvol/vbdev_lvol.o 00:05:26.582 SO libspdk_blobfs_bdev.so.6.0 00:05:26.582 LIB libspdk_bdev_delay.a 00:05:26.582 SYMLINK libspdk_bdev_error.so 00:05:26.582 CC module/bdev/null/bdev_null.o 00:05:26.582 CC module/bdev/malloc/bdev_malloc.o 00:05:26.582 SO libspdk_bdev_delay.so.6.0 00:05:26.582 SYMLINK libspdk_blobfs_bdev.so 00:05:26.582 CC module/bdev/nvme/bdev_nvme.o 00:05:26.582 CC module/bdev/null/bdev_null_rpc.o 00:05:26.582 CC module/bdev/passthru/vbdev_passthru.o 00:05:26.582 LIB libspdk_bdev_gpt.a 00:05:26.582 CC module/bdev/raid/bdev_raid.o 00:05:26.582 SYMLINK libspdk_bdev_delay.so 00:05:26.582 CC module/bdev/raid/bdev_raid_rpc.o 00:05:26.582 SO libspdk_bdev_gpt.so.6.0 00:05:26.842 SYMLINK libspdk_bdev_gpt.so 00:05:26.842 CC module/bdev/raid/bdev_raid_sb.o 00:05:26.842 CC module/bdev/split/vbdev_split.o 00:05:26.842 CC module/bdev/split/vbdev_split_rpc.o 00:05:26.842 LIB libspdk_bdev_null.a 00:05:26.842 SO libspdk_bdev_null.so.6.0 00:05:26.842 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:26.842 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:26.842 SYMLINK libspdk_bdev_null.so 00:05:26.842 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:27.102 LIB libspdk_bdev_split.a 00:05:27.102 SO libspdk_bdev_split.so.6.0 00:05:27.102 CC module/bdev/raid/raid0.o 00:05:27.102 LIB libspdk_bdev_passthru.a 
00:05:27.102 SYMLINK libspdk_bdev_split.so 00:05:27.102 SO libspdk_bdev_passthru.so.6.0 00:05:27.102 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:27.102 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:27.102 LIB libspdk_bdev_malloc.a 00:05:27.102 SYMLINK libspdk_bdev_passthru.so 00:05:27.102 SO libspdk_bdev_malloc.so.6.0 00:05:27.102 CC module/bdev/aio/bdev_aio.o 00:05:27.362 SYMLINK libspdk_bdev_malloc.so 00:05:27.362 CC module/bdev/aio/bdev_aio_rpc.o 00:05:27.362 CC module/bdev/ftl/bdev_ftl.o 00:05:27.362 CC module/bdev/nvme/nvme_rpc.o 00:05:27.362 CC module/bdev/iscsi/bdev_iscsi.o 00:05:27.362 LIB libspdk_bdev_lvol.a 00:05:27.362 SO libspdk_bdev_lvol.so.6.0 00:05:27.362 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:27.362 SYMLINK libspdk_bdev_lvol.so 00:05:27.362 CC module/bdev/nvme/bdev_mdns_client.o 00:05:27.362 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:27.622 LIB libspdk_bdev_aio.a 00:05:27.622 CC module/bdev/nvme/vbdev_opal.o 00:05:27.622 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:27.622 SO libspdk_bdev_aio.so.6.0 00:05:27.622 LIB libspdk_bdev_zone_block.a 00:05:27.622 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:27.622 SO libspdk_bdev_zone_block.so.6.0 00:05:27.622 SYMLINK libspdk_bdev_aio.so 00:05:27.622 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:27.622 SYMLINK libspdk_bdev_zone_block.so 00:05:27.622 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:27.622 CC module/bdev/raid/raid1.o 00:05:27.622 LIB libspdk_bdev_iscsi.a 00:05:27.622 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:27.622 SO libspdk_bdev_iscsi.so.6.0 00:05:27.622 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:27.622 LIB libspdk_bdev_ftl.a 00:05:27.622 SYMLINK libspdk_bdev_iscsi.so 00:05:27.622 CC module/bdev/raid/concat.o 00:05:27.882 SO libspdk_bdev_ftl.so.6.0 00:05:27.882 CC module/bdev/raid/raid5f.o 00:05:27.882 SYMLINK libspdk_bdev_ftl.so 00:05:28.141 LIB libspdk_bdev_virtio.a 00:05:28.142 SO libspdk_bdev_virtio.so.6.0 00:05:28.142 LIB libspdk_bdev_raid.a 00:05:28.401 SYMLINK 
libspdk_bdev_virtio.so 00:05:28.401 SO libspdk_bdev_raid.so.6.0 00:05:28.401 SYMLINK libspdk_bdev_raid.so 00:05:28.971 LIB libspdk_bdev_nvme.a 00:05:28.971 SO libspdk_bdev_nvme.so.7.0 00:05:29.231 SYMLINK libspdk_bdev_nvme.so 00:05:29.809 CC module/event/subsystems/vmd/vmd.o 00:05:29.809 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:29.809 CC module/event/subsystems/iobuf/iobuf.o 00:05:29.809 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:29.809 CC module/event/subsystems/keyring/keyring.o 00:05:29.809 CC module/event/subsystems/scheduler/scheduler.o 00:05:29.809 CC module/event/subsystems/sock/sock.o 00:05:29.809 CC module/event/subsystems/fsdev/fsdev.o 00:05:29.809 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:29.809 LIB libspdk_event_fsdev.a 00:05:29.809 LIB libspdk_event_vhost_blk.a 00:05:29.809 LIB libspdk_event_scheduler.a 00:05:29.809 LIB libspdk_event_vmd.a 00:05:29.809 LIB libspdk_event_sock.a 00:05:29.809 LIB libspdk_event_iobuf.a 00:05:29.809 LIB libspdk_event_keyring.a 00:05:30.069 SO libspdk_event_vhost_blk.so.3.0 00:05:30.069 SO libspdk_event_sock.so.5.0 00:05:30.069 SO libspdk_event_scheduler.so.4.0 00:05:30.069 SO libspdk_event_fsdev.so.1.0 00:05:30.069 SO libspdk_event_vmd.so.6.0 00:05:30.069 SO libspdk_event_keyring.so.1.0 00:05:30.069 SO libspdk_event_iobuf.so.3.0 00:05:30.069 SYMLINK libspdk_event_vhost_blk.so 00:05:30.069 SYMLINK libspdk_event_sock.so 00:05:30.069 SYMLINK libspdk_event_scheduler.so 00:05:30.069 SYMLINK libspdk_event_fsdev.so 00:05:30.069 SYMLINK libspdk_event_keyring.so 00:05:30.069 SYMLINK libspdk_event_vmd.so 00:05:30.069 SYMLINK libspdk_event_iobuf.so 00:05:30.329 CC module/event/subsystems/accel/accel.o 00:05:30.596 LIB libspdk_event_accel.a 00:05:30.596 SO libspdk_event_accel.so.6.0 00:05:30.596 SYMLINK libspdk_event_accel.so 00:05:31.174 CC module/event/subsystems/bdev/bdev.o 00:05:31.174 LIB libspdk_event_bdev.a 00:05:31.174 SO libspdk_event_bdev.so.6.0 00:05:31.433 SYMLINK libspdk_event_bdev.so 
00:05:31.692 CC module/event/subsystems/ublk/ublk.o 00:05:31.692 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:31.692 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:31.692 CC module/event/subsystems/nbd/nbd.o 00:05:31.692 CC module/event/subsystems/scsi/scsi.o 00:05:31.692 LIB libspdk_event_ublk.a 00:05:31.692 LIB libspdk_event_nbd.a 00:05:31.950 SO libspdk_event_ublk.so.3.0 00:05:31.950 LIB libspdk_event_scsi.a 00:05:31.950 SO libspdk_event_nbd.so.6.0 00:05:31.950 SO libspdk_event_scsi.so.6.0 00:05:31.950 SYMLINK libspdk_event_ublk.so 00:05:31.950 SYMLINK libspdk_event_nbd.so 00:05:31.950 LIB libspdk_event_nvmf.a 00:05:31.950 SYMLINK libspdk_event_scsi.so 00:05:31.950 SO libspdk_event_nvmf.so.6.0 00:05:31.950 SYMLINK libspdk_event_nvmf.so 00:05:32.210 CC module/event/subsystems/iscsi/iscsi.o 00:05:32.210 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:32.470 LIB libspdk_event_vhost_scsi.a 00:05:32.470 LIB libspdk_event_iscsi.a 00:05:32.470 SO libspdk_event_vhost_scsi.so.3.0 00:05:32.470 SO libspdk_event_iscsi.so.6.0 00:05:32.470 SYMLINK libspdk_event_iscsi.so 00:05:32.470 SYMLINK libspdk_event_vhost_scsi.so 00:05:32.730 SO libspdk.so.6.0 00:05:32.730 SYMLINK libspdk.so 00:05:32.990 CC app/spdk_lspci/spdk_lspci.o 00:05:32.990 CXX app/trace/trace.o 00:05:32.990 CC app/trace_record/trace_record.o 00:05:32.990 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:33.249 CC app/nvmf_tgt/nvmf_main.o 00:05:33.249 CC app/iscsi_tgt/iscsi_tgt.o 00:05:33.249 CC app/spdk_tgt/spdk_tgt.o 00:05:33.249 CC examples/util/zipf/zipf.o 00:05:33.249 CC examples/ioat/perf/perf.o 00:05:33.249 CC test/thread/poller_perf/poller_perf.o 00:05:33.249 LINK spdk_lspci 00:05:33.249 LINK interrupt_tgt 00:05:33.249 LINK nvmf_tgt 00:05:33.249 LINK iscsi_tgt 00:05:33.249 LINK zipf 00:05:33.249 LINK spdk_tgt 00:05:33.249 LINK poller_perf 00:05:33.249 LINK spdk_trace_record 00:05:33.509 LINK ioat_perf 00:05:33.509 LINK spdk_trace 00:05:33.509 CC app/spdk_nvme_perf/perf.o 00:05:33.509 CC 
app/spdk_nvme_identify/identify.o 00:05:33.509 CC app/spdk_nvme_discover/discovery_aer.o 00:05:33.509 CC app/spdk_top/spdk_top.o 00:05:33.509 CC examples/ioat/verify/verify.o 00:05:33.769 CC app/spdk_dd/spdk_dd.o 00:05:33.769 CC test/dma/test_dma/test_dma.o 00:05:33.769 CC app/fio/nvme/fio_plugin.o 00:05:33.769 CC examples/thread/thread/thread_ex.o 00:05:33.769 LINK spdk_nvme_discover 00:05:33.769 LINK verify 00:05:33.769 CC examples/sock/hello_world/hello_sock.o 00:05:34.029 LINK spdk_dd 00:05:34.029 LINK thread 00:05:34.029 CC app/vhost/vhost.o 00:05:34.029 CC examples/vmd/lsvmd/lsvmd.o 00:05:34.029 LINK hello_sock 00:05:34.288 LINK test_dma 00:05:34.288 LINK lsvmd 00:05:34.288 LINK vhost 00:05:34.288 LINK spdk_nvme 00:05:34.288 TEST_HEADER include/spdk/accel.h 00:05:34.288 TEST_HEADER include/spdk/accel_module.h 00:05:34.288 TEST_HEADER include/spdk/assert.h 00:05:34.288 TEST_HEADER include/spdk/barrier.h 00:05:34.288 TEST_HEADER include/spdk/base64.h 00:05:34.288 TEST_HEADER include/spdk/bdev.h 00:05:34.288 TEST_HEADER include/spdk/bdev_module.h 00:05:34.288 TEST_HEADER include/spdk/bdev_zone.h 00:05:34.288 TEST_HEADER include/spdk/bit_array.h 00:05:34.288 TEST_HEADER include/spdk/bit_pool.h 00:05:34.288 TEST_HEADER include/spdk/blob_bdev.h 00:05:34.288 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:34.288 TEST_HEADER include/spdk/blobfs.h 00:05:34.288 TEST_HEADER include/spdk/blob.h 00:05:34.288 TEST_HEADER include/spdk/conf.h 00:05:34.288 TEST_HEADER include/spdk/config.h 00:05:34.288 TEST_HEADER include/spdk/cpuset.h 00:05:34.288 TEST_HEADER include/spdk/crc16.h 00:05:34.288 TEST_HEADER include/spdk/crc32.h 00:05:34.288 TEST_HEADER include/spdk/crc64.h 00:05:34.288 TEST_HEADER include/spdk/dif.h 00:05:34.288 TEST_HEADER include/spdk/dma.h 00:05:34.288 TEST_HEADER include/spdk/endian.h 00:05:34.288 TEST_HEADER include/spdk/env_dpdk.h 00:05:34.288 TEST_HEADER include/spdk/env.h 00:05:34.288 TEST_HEADER include/spdk/event.h 00:05:34.289 TEST_HEADER 
include/spdk/fd_group.h 00:05:34.289 TEST_HEADER include/spdk/fd.h 00:05:34.289 TEST_HEADER include/spdk/file.h 00:05:34.289 TEST_HEADER include/spdk/fsdev.h 00:05:34.289 TEST_HEADER include/spdk/fsdev_module.h 00:05:34.289 TEST_HEADER include/spdk/ftl.h 00:05:34.289 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:34.289 TEST_HEADER include/spdk/gpt_spec.h 00:05:34.289 TEST_HEADER include/spdk/hexlify.h 00:05:34.289 TEST_HEADER include/spdk/histogram_data.h 00:05:34.289 CC examples/idxd/perf/perf.o 00:05:34.289 TEST_HEADER include/spdk/idxd.h 00:05:34.289 TEST_HEADER include/spdk/idxd_spec.h 00:05:34.289 TEST_HEADER include/spdk/init.h 00:05:34.289 TEST_HEADER include/spdk/ioat.h 00:05:34.289 TEST_HEADER include/spdk/ioat_spec.h 00:05:34.289 TEST_HEADER include/spdk/iscsi_spec.h 00:05:34.289 TEST_HEADER include/spdk/json.h 00:05:34.289 TEST_HEADER include/spdk/jsonrpc.h 00:05:34.289 TEST_HEADER include/spdk/keyring.h 00:05:34.289 TEST_HEADER include/spdk/keyring_module.h 00:05:34.289 TEST_HEADER include/spdk/likely.h 00:05:34.289 TEST_HEADER include/spdk/log.h 00:05:34.289 TEST_HEADER include/spdk/lvol.h 00:05:34.289 TEST_HEADER include/spdk/md5.h 00:05:34.289 TEST_HEADER include/spdk/memory.h 00:05:34.289 CC test/app/bdev_svc/bdev_svc.o 00:05:34.289 TEST_HEADER include/spdk/mmio.h 00:05:34.289 TEST_HEADER include/spdk/nbd.h 00:05:34.289 TEST_HEADER include/spdk/net.h 00:05:34.289 TEST_HEADER include/spdk/notify.h 00:05:34.289 TEST_HEADER include/spdk/nvme.h 00:05:34.289 TEST_HEADER include/spdk/nvme_intel.h 00:05:34.289 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:34.289 LINK spdk_nvme_perf 00:05:34.289 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:34.289 TEST_HEADER include/spdk/nvme_spec.h 00:05:34.289 TEST_HEADER include/spdk/nvme_zns.h 00:05:34.289 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:34.289 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:34.289 TEST_HEADER include/spdk/nvmf.h 00:05:34.289 TEST_HEADER include/spdk/nvmf_spec.h 00:05:34.289 TEST_HEADER 
include/spdk/nvmf_transport.h 00:05:34.289 TEST_HEADER include/spdk/opal.h 00:05:34.289 TEST_HEADER include/spdk/opal_spec.h 00:05:34.289 TEST_HEADER include/spdk/pci_ids.h 00:05:34.289 TEST_HEADER include/spdk/pipe.h 00:05:34.548 TEST_HEADER include/spdk/queue.h 00:05:34.548 TEST_HEADER include/spdk/reduce.h 00:05:34.548 TEST_HEADER include/spdk/rpc.h 00:05:34.548 TEST_HEADER include/spdk/scheduler.h 00:05:34.548 TEST_HEADER include/spdk/scsi.h 00:05:34.548 TEST_HEADER include/spdk/scsi_spec.h 00:05:34.548 TEST_HEADER include/spdk/sock.h 00:05:34.548 TEST_HEADER include/spdk/stdinc.h 00:05:34.548 TEST_HEADER include/spdk/string.h 00:05:34.548 TEST_HEADER include/spdk/thread.h 00:05:34.548 TEST_HEADER include/spdk/trace.h 00:05:34.548 TEST_HEADER include/spdk/trace_parser.h 00:05:34.548 CC app/fio/bdev/fio_plugin.o 00:05:34.548 TEST_HEADER include/spdk/tree.h 00:05:34.548 TEST_HEADER include/spdk/ublk.h 00:05:34.548 TEST_HEADER include/spdk/util.h 00:05:34.548 TEST_HEADER include/spdk/uuid.h 00:05:34.549 CC examples/vmd/led/led.o 00:05:34.549 TEST_HEADER include/spdk/version.h 00:05:34.549 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:34.549 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:34.549 TEST_HEADER include/spdk/vhost.h 00:05:34.549 TEST_HEADER include/spdk/vmd.h 00:05:34.549 TEST_HEADER include/spdk/xor.h 00:05:34.549 TEST_HEADER include/spdk/zipf.h 00:05:34.549 CXX test/cpp_headers/accel.o 00:05:34.549 LINK spdk_nvme_identify 00:05:34.549 LINK spdk_top 00:05:34.549 LINK bdev_svc 00:05:34.549 CC test/event/event_perf/event_perf.o 00:05:34.549 CC test/env/mem_callbacks/mem_callbacks.o 00:05:34.549 LINK led 00:05:34.549 CXX test/cpp_headers/accel_module.o 00:05:34.549 CC test/event/reactor/reactor.o 00:05:34.549 LINK idxd_perf 00:05:34.808 LINK event_perf 00:05:34.808 LINK reactor 00:05:34.808 CXX test/cpp_headers/assert.o 00:05:34.808 LINK mem_callbacks 00:05:34.808 CC examples/nvme/hello_world/hello_world.o 00:05:34.808 CC test/nvme/aer/aer.o 00:05:34.808 
CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:34.808 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:34.808 CC test/nvme/reset/reset.o 00:05:34.808 CXX test/cpp_headers/barrier.o 00:05:34.808 CC test/nvme/sgl/sgl.o 00:05:34.808 LINK spdk_bdev 00:05:35.069 CC test/event/reactor_perf/reactor_perf.o 00:05:35.069 CC test/env/vtophys/vtophys.o 00:05:35.069 LINK hello_world 00:05:35.069 LINK aer 00:05:35.069 CXX test/cpp_headers/base64.o 00:05:35.069 LINK reactor_perf 00:05:35.069 LINK vtophys 00:05:35.069 CC test/nvme/e2edp/nvme_dp.o 00:05:35.069 LINK reset 00:05:35.069 LINK sgl 00:05:35.329 CXX test/cpp_headers/bdev.o 00:05:35.329 CC examples/nvme/reconnect/reconnect.o 00:05:35.329 LINK nvme_fuzz 00:05:35.329 CC test/nvme/overhead/overhead.o 00:05:35.329 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:35.329 CC test/event/app_repeat/app_repeat.o 00:05:35.329 CC test/nvme/err_injection/err_injection.o 00:05:35.329 LINK nvme_dp 00:05:35.329 CXX test/cpp_headers/bdev_module.o 00:05:35.329 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:35.329 CXX test/cpp_headers/bdev_zone.o 00:05:35.588 LINK env_dpdk_post_init 00:05:35.588 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:35.588 LINK app_repeat 00:05:35.588 LINK err_injection 00:05:35.588 LINK overhead 00:05:35.588 LINK reconnect 00:05:35.588 CXX test/cpp_headers/bit_array.o 00:05:35.588 CC test/rpc_client/rpc_client_test.o 00:05:35.848 CC test/env/memory/memory_ut.o 00:05:35.848 CXX test/cpp_headers/bit_pool.o 00:05:35.848 CC test/nvme/startup/startup.o 00:05:35.848 CC test/env/pci/pci_ut.o 00:05:35.848 CC test/accel/dif/dif.o 00:05:35.848 CC test/event/scheduler/scheduler.o 00:05:35.848 LINK rpc_client_test 00:05:35.848 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:35.848 LINK vhost_fuzz 00:05:35.848 CXX test/cpp_headers/blob_bdev.o 00:05:35.848 LINK startup 00:05:36.108 LINK scheduler 00:05:36.108 CXX test/cpp_headers/blobfs_bdev.o 00:05:36.108 CC test/nvme/reserve/reserve.o 00:05:36.108 CC 
test/blobfs/mkfs/mkfs.o 00:05:36.108 LINK pci_ut 00:05:36.368 CXX test/cpp_headers/blobfs.o 00:05:36.368 CC test/app/histogram_perf/histogram_perf.o 00:05:36.368 CC test/lvol/esnap/esnap.o 00:05:36.368 LINK reserve 00:05:36.368 LINK mkfs 00:05:36.368 LINK nvme_manage 00:05:36.368 CXX test/cpp_headers/blob.o 00:05:36.368 LINK histogram_perf 00:05:36.368 CC test/app/jsoncat/jsoncat.o 00:05:36.628 LINK memory_ut 00:05:36.628 CC test/nvme/simple_copy/simple_copy.o 00:05:36.628 LINK dif 00:05:36.628 CXX test/cpp_headers/conf.o 00:05:36.628 CC examples/nvme/arbitration/arbitration.o 00:05:36.628 CC test/app/stub/stub.o 00:05:36.628 LINK jsoncat 00:05:36.628 LINK iscsi_fuzz 00:05:36.628 CC examples/nvme/hotplug/hotplug.o 00:05:36.628 CXX test/cpp_headers/config.o 00:05:36.628 CXX test/cpp_headers/cpuset.o 00:05:36.628 LINK stub 00:05:36.628 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:36.887 LINK simple_copy 00:05:36.887 CC examples/nvme/abort/abort.o 00:05:36.887 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:36.887 CXX test/cpp_headers/crc16.o 00:05:36.887 LINK hotplug 00:05:36.887 LINK arbitration 00:05:36.887 LINK cmb_copy 00:05:36.887 LINK pmr_persistence 00:05:36.887 CXX test/cpp_headers/crc32.o 00:05:36.887 CC test/nvme/connect_stress/connect_stress.o 00:05:36.887 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:36.887 CXX test/cpp_headers/crc64.o 00:05:37.147 CC test/bdev/bdevio/bdevio.o 00:05:37.147 CC test/nvme/boot_partition/boot_partition.o 00:05:37.147 CXX test/cpp_headers/dif.o 00:05:37.147 CC test/nvme/compliance/nvme_compliance.o 00:05:37.147 LINK connect_stress 00:05:37.147 LINK abort 00:05:37.147 CC test/nvme/fused_ordering/fused_ordering.o 00:05:37.147 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:37.147 LINK boot_partition 00:05:37.147 LINK hello_fsdev 00:05:37.407 CXX test/cpp_headers/dma.o 00:05:37.407 CXX test/cpp_headers/endian.o 00:05:37.407 CC test/nvme/fdp/fdp.o 00:05:37.407 LINK doorbell_aers 00:05:37.407 LINK fused_ordering 
00:05:37.407 LINK bdevio 00:05:37.407 LINK nvme_compliance 00:05:37.407 CC examples/accel/perf/accel_perf.o 00:05:37.407 CC test/nvme/cuse/cuse.o 00:05:37.407 CXX test/cpp_headers/env_dpdk.o 00:05:37.667 CXX test/cpp_headers/env.o 00:05:37.667 CC examples/blob/hello_world/hello_blob.o 00:05:37.667 CXX test/cpp_headers/event.o 00:05:37.667 CXX test/cpp_headers/fd_group.o 00:05:37.667 CXX test/cpp_headers/fd.o 00:05:37.667 CC examples/blob/cli/blobcli.o 00:05:37.667 LINK fdp 00:05:37.667 CXX test/cpp_headers/file.o 00:05:37.667 CXX test/cpp_headers/fsdev.o 00:05:37.667 CXX test/cpp_headers/fsdev_module.o 00:05:37.927 CXX test/cpp_headers/ftl.o 00:05:37.927 LINK hello_blob 00:05:37.927 CXX test/cpp_headers/fuse_dispatcher.o 00:05:37.927 CXX test/cpp_headers/gpt_spec.o 00:05:37.927 CXX test/cpp_headers/hexlify.o 00:05:37.927 CXX test/cpp_headers/histogram_data.o 00:05:37.927 CXX test/cpp_headers/idxd.o 00:05:37.927 CXX test/cpp_headers/idxd_spec.o 00:05:37.927 LINK accel_perf 00:05:37.927 CXX test/cpp_headers/init.o 00:05:37.927 CXX test/cpp_headers/ioat.o 00:05:38.186 CXX test/cpp_headers/ioat_spec.o 00:05:38.186 CXX test/cpp_headers/iscsi_spec.o 00:05:38.186 CXX test/cpp_headers/json.o 00:05:38.186 CXX test/cpp_headers/jsonrpc.o 00:05:38.186 CXX test/cpp_headers/keyring.o 00:05:38.186 CXX test/cpp_headers/keyring_module.o 00:05:38.186 LINK blobcli 00:05:38.186 CXX test/cpp_headers/likely.o 00:05:38.186 CXX test/cpp_headers/log.o 00:05:38.186 CXX test/cpp_headers/lvol.o 00:05:38.186 CXX test/cpp_headers/md5.o 00:05:38.186 CXX test/cpp_headers/memory.o 00:05:38.186 CC examples/bdev/hello_world/hello_bdev.o 00:05:38.186 CXX test/cpp_headers/mmio.o 00:05:38.446 CXX test/cpp_headers/nbd.o 00:05:38.446 CXX test/cpp_headers/net.o 00:05:38.446 CXX test/cpp_headers/notify.o 00:05:38.446 CC examples/bdev/bdevperf/bdevperf.o 00:05:38.446 CXX test/cpp_headers/nvme.o 00:05:38.446 CXX test/cpp_headers/nvme_intel.o 00:05:38.446 CXX test/cpp_headers/nvme_ocssd.o 00:05:38.446 CXX 
test/cpp_headers/nvme_ocssd_spec.o 00:05:38.446 CXX test/cpp_headers/nvme_spec.o 00:05:38.446 CXX test/cpp_headers/nvme_zns.o 00:05:38.446 LINK hello_bdev 00:05:38.446 CXX test/cpp_headers/nvmf_cmd.o 00:05:38.446 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:38.706 CXX test/cpp_headers/nvmf.o 00:05:38.706 CXX test/cpp_headers/nvmf_spec.o 00:05:38.706 CXX test/cpp_headers/nvmf_transport.o 00:05:38.706 CXX test/cpp_headers/opal.o 00:05:38.706 LINK cuse 00:05:38.706 CXX test/cpp_headers/opal_spec.o 00:05:38.706 CXX test/cpp_headers/pci_ids.o 00:05:38.706 CXX test/cpp_headers/pipe.o 00:05:38.706 CXX test/cpp_headers/queue.o 00:05:38.706 CXX test/cpp_headers/reduce.o 00:05:38.706 CXX test/cpp_headers/rpc.o 00:05:38.966 CXX test/cpp_headers/scheduler.o 00:05:38.966 CXX test/cpp_headers/scsi.o 00:05:38.966 CXX test/cpp_headers/scsi_spec.o 00:05:38.966 CXX test/cpp_headers/sock.o 00:05:38.966 CXX test/cpp_headers/stdinc.o 00:05:38.966 CXX test/cpp_headers/string.o 00:05:38.966 CXX test/cpp_headers/thread.o 00:05:38.966 CXX test/cpp_headers/trace.o 00:05:38.966 CXX test/cpp_headers/trace_parser.o 00:05:38.966 CXX test/cpp_headers/tree.o 00:05:38.966 CXX test/cpp_headers/ublk.o 00:05:38.966 CXX test/cpp_headers/util.o 00:05:38.966 CXX test/cpp_headers/uuid.o 00:05:38.966 CXX test/cpp_headers/version.o 00:05:38.966 CXX test/cpp_headers/vfio_user_pci.o 00:05:39.226 CXX test/cpp_headers/vfio_user_spec.o 00:05:39.226 CXX test/cpp_headers/vhost.o 00:05:39.226 CXX test/cpp_headers/vmd.o 00:05:39.226 CXX test/cpp_headers/xor.o 00:05:39.226 CXX test/cpp_headers/zipf.o 00:05:39.226 LINK bdevperf 00:05:39.794 CC examples/nvmf/nvmf/nvmf.o 00:05:40.364 LINK nvmf 00:05:41.745 LINK esnap 00:05:42.314 00:05:42.314 real 1m21.825s 00:05:42.314 user 5m58.357s 00:05:42.314 sys 1m13.214s 00:05:42.314 02:20:00 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:42.314 02:20:00 make -- common/autotest_common.sh@10 -- $ set +x 00:05:42.314 ************************************ 00:05:42.314 
END TEST make 00:05:42.314 ************************************ 00:05:42.314 02:20:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:42.314 02:20:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:42.314 02:20:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:42.314 02:20:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.314 02:20:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:42.314 02:20:00 -- pm/common@44 -- $ pid=6195 00:05:42.314 02:20:00 -- pm/common@50 -- $ kill -TERM 6195 00:05:42.314 02:20:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.314 02:20:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:42.314 02:20:00 -- pm/common@44 -- $ pid=6196 00:05:42.314 02:20:00 -- pm/common@50 -- $ kill -TERM 6196 00:05:42.314 02:20:00 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.314 02:20:00 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.314 02:20:00 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.314 02:20:00 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.314 02:20:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.314 02:20:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.314 02:20:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.314 02:20:00 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.314 02:20:00 -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.314 02:20:00 -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.314 02:20:00 -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.314 02:20:00 -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.314 02:20:00 -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.314 02:20:00 -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.314 02:20:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.315 02:20:00 -- scripts/common.sh@344 -- # case 
"$op" in 00:05:42.315 02:20:00 -- scripts/common.sh@345 -- # : 1 00:05:42.315 02:20:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.315 02:20:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.315 02:20:00 -- scripts/common.sh@365 -- # decimal 1 00:05:42.315 02:20:00 -- scripts/common.sh@353 -- # local d=1 00:05:42.315 02:20:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.315 02:20:00 -- scripts/common.sh@355 -- # echo 1 00:05:42.315 02:20:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.315 02:20:00 -- scripts/common.sh@366 -- # decimal 2 00:05:42.315 02:20:00 -- scripts/common.sh@353 -- # local d=2 00:05:42.315 02:20:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.315 02:20:00 -- scripts/common.sh@355 -- # echo 2 00:05:42.315 02:20:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.315 02:20:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.315 02:20:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.315 02:20:00 -- scripts/common.sh@368 -- # return 0 00:05:42.315 02:20:00 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.315 02:20:00 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.315 --rc genhtml_branch_coverage=1 00:05:42.315 --rc genhtml_function_coverage=1 00:05:42.315 --rc genhtml_legend=1 00:05:42.315 --rc geninfo_all_blocks=1 00:05:42.315 --rc geninfo_unexecuted_blocks=1 00:05:42.315 00:05:42.315 ' 00:05:42.315 02:20:00 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.315 --rc genhtml_branch_coverage=1 00:05:42.315 --rc genhtml_function_coverage=1 00:05:42.315 --rc genhtml_legend=1 00:05:42.315 --rc geninfo_all_blocks=1 00:05:42.315 --rc geninfo_unexecuted_blocks=1 00:05:42.315 00:05:42.315 ' 00:05:42.315 02:20:00 
-- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:42.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.315 --rc genhtml_branch_coverage=1 00:05:42.315 --rc genhtml_function_coverage=1 00:05:42.315 --rc genhtml_legend=1 00:05:42.315 --rc geninfo_all_blocks=1 00:05:42.315 --rc geninfo_unexecuted_blocks=1 00:05:42.315 00:05:42.315 ' 00:05:42.315 02:20:00 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.315 --rc genhtml_branch_coverage=1 00:05:42.315 --rc genhtml_function_coverage=1 00:05:42.315 --rc genhtml_legend=1 00:05:42.315 --rc geninfo_all_blocks=1 00:05:42.315 --rc geninfo_unexecuted_blocks=1 00:05:42.315 00:05:42.315 ' 00:05:42.315 02:20:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:42.315 02:20:00 -- nvmf/common.sh@7 -- # uname -s 00:05:42.315 02:20:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.315 02:20:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.315 02:20:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.315 02:20:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.315 02:20:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.315 02:20:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.315 02:20:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.315 02:20:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.315 02:20:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.315 02:20:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.575 02:20:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b3f98cd3-51b2-436d-a29d-feb56f34e045 00:05:42.575 02:20:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=b3f98cd3-51b2-436d-a29d-feb56f34e045 00:05:42.575 02:20:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.575 02:20:01 -- nvmf/common.sh@20 -- 
# NVME_CONNECT='nvme connect' 00:05:42.575 02:20:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:42.575 02:20:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.575 02:20:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:42.575 02:20:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.575 02:20:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.575 02:20:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.575 02:20:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.575 02:20:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.575 02:20:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.575 02:20:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.575 02:20:01 -- paths/export.sh@5 -- # export PATH 00:05:42.575 02:20:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.575 02:20:01 -- nvmf/common.sh@51 -- # 
: 0 00:05:42.575 02:20:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.575 02:20:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.575 02:20:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.575 02:20:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.575 02:20:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.575 02:20:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.575 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.575 02:20:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.575 02:20:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.575 02:20:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.575 02:20:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:42.575 02:20:01 -- spdk/autotest.sh@32 -- # uname -s 00:05:42.575 02:20:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:42.575 02:20:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:42.575 02:20:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:42.575 02:20:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:42.575 02:20:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:42.575 02:20:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:42.575 02:20:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:42.575 02:20:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:42.575 02:20:01 -- spdk/autotest.sh@48 -- # udevadm_pid=66623 00:05:42.575 02:20:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:42.575 02:20:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:42.575 02:20:01 -- pm/common@17 -- # local monitor 00:05:42.575 02:20:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.575 02:20:01 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.575 02:20:01 -- pm/common@25 -- # sleep 1 00:05:42.575 02:20:01 -- pm/common@21 -- # date +%s 00:05:42.575 02:20:01 -- pm/common@21 -- # date +%s 00:05:42.575 02:20:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728786001 00:05:42.575 02:20:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728786001 00:05:42.575 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728786001_collect-cpu-load.pm.log 00:05:42.575 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728786001_collect-vmstat.pm.log 00:05:43.528 02:20:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:43.528 02:20:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:43.528 02:20:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.528 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:43.528 02:20:02 -- spdk/autotest.sh@59 -- # create_test_list 00:05:43.528 02:20:02 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:43.528 02:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:43.528 02:20:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:43.528 02:20:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:43.528 02:20:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:43.528 02:20:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:43.528 02:20:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:43.528 02:20:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:43.528 02:20:02 -- common/autotest_common.sh@1455 -- # uname 00:05:43.528 02:20:02 -- 
common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:43.528 02:20:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:43.528 02:20:02 -- common/autotest_common.sh@1475 -- # uname 00:05:43.528 02:20:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:43.528 02:20:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:43.528 02:20:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:43.788 lcov: LCOV version 1.15 00:05:43.788 02:20:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:58.684 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:58.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:13.582 02:20:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:13.582 02:20:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.582 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:06:13.582 02:20:31 -- spdk/autotest.sh@78 -- # rm -f 00:06:13.582 02:20:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:13.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:13.582 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:13.582 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:13.582 02:20:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:13.582 02:20:31 -- common/autotest_common.sh@1655 -- # 
zoned_devs=() 00:06:13.582 02:20:31 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:13.582 02:20:31 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:13.582 02:20:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:13.582 02:20:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:13.582 02:20:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:13.582 02:20:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:13.582 02:20:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:13.582 02:20:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:13.582 02:20:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:13.582 02:20:32 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:13.582 02:20:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:13.582 02:20:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:13.582 02:20:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:13.582 02:20:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:13.582 02:20:32 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:13.582 02:20:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:13.582 02:20:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:13.582 02:20:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:13.582 02:20:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:13.582 02:20:32 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:13.582 02:20:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:13.582 02:20:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:13.582 02:20:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:13.582 
02:20:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.582 02:20:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.582 02:20:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:13.582 02:20:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:13.582 02:20:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:13.582 No valid GPT data, bailing 00:06:13.582 02:20:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:13.582 02:20:32 -- scripts/common.sh@394 -- # pt= 00:06:13.582 02:20:32 -- scripts/common.sh@395 -- # return 1 00:06:13.582 02:20:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:13.582 1+0 records in 00:06:13.582 1+0 records out 00:06:13.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00682083 s, 154 MB/s 00:06:13.582 02:20:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.582 02:20:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.582 02:20:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:13.582 02:20:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:13.582 02:20:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:13.582 No valid GPT data, bailing 00:06:13.582 02:20:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:13.582 02:20:32 -- scripts/common.sh@394 -- # pt= 00:06:13.582 02:20:32 -- scripts/common.sh@395 -- # return 1 00:06:13.582 02:20:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:13.582 1+0 records in 00:06:13.582 1+0 records out 00:06:13.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650653 s, 161 MB/s 00:06:13.582 02:20:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.582 02:20:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.582 02:20:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:13.582 
02:20:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:13.582 02:20:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:13.582 No valid GPT data, bailing 00:06:13.582 02:20:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:13.582 02:20:32 -- scripts/common.sh@394 -- # pt= 00:06:13.582 02:20:32 -- scripts/common.sh@395 -- # return 1 00:06:13.582 02:20:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:13.843 1+0 records in 00:06:13.843 1+0 records out 00:06:13.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065874 s, 159 MB/s 00:06:13.843 02:20:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.843 02:20:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.843 02:20:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:13.843 02:20:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:13.843 02:20:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:13.843 No valid GPT data, bailing 00:06:13.843 02:20:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:13.843 02:20:32 -- scripts/common.sh@394 -- # pt= 00:06:13.843 02:20:32 -- scripts/common.sh@395 -- # return 1 00:06:13.843 02:20:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:13.843 1+0 records in 00:06:13.843 1+0 records out 00:06:13.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661724 s, 158 MB/s 00:06:13.843 02:20:32 -- spdk/autotest.sh@105 -- # sync 00:06:14.103 02:20:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:14.103 02:20:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:14.103 02:20:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:17.396 02:20:35 -- spdk/autotest.sh@111 -- # uname -s 00:06:17.396 02:20:35 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:06:17.396 02:20:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:17.396 02:20:35 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:17.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.655 Hugepages 00:06:17.655 node hugesize free / total 00:06:17.916 node0 1048576kB 0 / 0 00:06:17.916 node0 2048kB 0 / 0 00:06:17.916 00:06:17.916 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:17.916 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:17.916 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:18.175 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:18.175 02:20:36 -- spdk/autotest.sh@117 -- # uname -s 00:06:18.175 02:20:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:18.175 02:20:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:18.175 02:20:36 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:19.115 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:19.115 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:19.115 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:19.115 02:20:37 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:20.053 02:20:38 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:20.053 02:20:38 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:20.054 02:20:38 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:20.054 02:20:38 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:20.054 02:20:38 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:20.054 02:20:38 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:20.054 02:20:38 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:20.054 02:20:38 -- common/autotest_common.sh@1497 
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:20.054 02:20:38 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:20.314 02:20:38 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:20.314 02:20:38 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:20.314 02:20:38 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:20.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.884 Waiting for block devices as requested 00:06:20.884 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:20.884 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:20.884 02:20:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:20.884 02:20:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:20.884 02:20:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:20.884 02:20:39 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:20.884 02:20:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:20.884 02:20:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:20.884 02:20:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:20.884 02:20:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:21.145 02:20:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:21.145 02:20:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:21.145 02:20:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:21.145 02:20:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:21.145 02:20:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:21.145 02:20:39 -- 
common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:21.145 02:20:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:21.145 02:20:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:21.145 02:20:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:21.145 02:20:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:21.145 02:20:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:21.145 02:20:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:21.145 02:20:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:21.145 02:20:39 -- common/autotest_common.sh@1541 -- # continue 00:06:21.145 02:20:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:21.145 02:20:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:21.145 02:20:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:21.145 02:20:39 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:21.145 02:20:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:21.145 02:20:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:21.145 02:20:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:21.145 02:20:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:21.145 02:20:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:21.145 02:20:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:21.145 02:20:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:21.145 02:20:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:21.145 02:20:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:21.145 02:20:39 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:21.145 02:20:39 -- 
common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:21.145 02:20:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:21.145 02:20:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:21.145 02:20:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:21.145 02:20:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:21.145 02:20:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:21.145 02:20:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:21.145 02:20:39 -- common/autotest_common.sh@1541 -- # continue 00:06:21.145 02:20:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:21.145 02:20:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.145 02:20:39 -- common/autotest_common.sh@10 -- # set +x 00:06:21.145 02:20:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:21.145 02:20:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:21.145 02:20:39 -- common/autotest_common.sh@10 -- # set +x 00:06:21.145 02:20:39 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:22.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:22.086 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.086 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.086 02:20:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:22.086 02:20:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.086 02:20:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.346 02:20:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:22.346 02:20:40 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:22.346 02:20:40 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:22.346 02:20:40 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:22.346 02:20:40 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:22.346 02:20:40 -- 
common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:22.346 02:20:40 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:22.346 02:20:40 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:22.346 02:20:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:22.346 02:20:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:22.346 02:20:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:22.346 02:20:40 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:22.346 02:20:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:22.346 02:20:40 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:22.346 02:20:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:22.346 02:20:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:22.346 02:20:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:22.346 02:20:40 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:22.346 02:20:40 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:22.346 02:20:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:22.346 02:20:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:22.346 02:20:40 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:22.346 02:20:40 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:22.346 02:20:40 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:22.346 02:20:40 -- common/autotest_common.sh@1570 -- # return 0 00:06:22.346 02:20:40 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:22.346 02:20:40 -- common/autotest_common.sh@1578 -- # return 0 00:06:22.346 02:20:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:22.346 02:20:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:06:22.346 02:20:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:22.346 02:20:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:22.346 02:20:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:22.346 02:20:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.346 02:20:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.346 02:20:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:22.346 02:20:40 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:22.346 02:20:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.346 02:20:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.346 02:20:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.346 ************************************ 00:06:22.346 START TEST env 00:06:22.346 ************************************ 00:06:22.346 02:20:40 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:22.606 * Looking for test storage... 
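Earlier in this run (around 02:20:31), the pre-cleanup pass iterated `/sys/block/nvme*` and called `is_block_zoned` on each namespace before wiping it with `dd`, skipping any device whose sysfs `queue/zoned` attribute reads something other than `none`. A minimal sketch of that check, assuming the same sysfs layout the trace shows; the `sysfs_root` parameter is added here purely for illustration, the real helper hardcodes `/sys/block`:

```shell
#!/usr/bin/env bash
# Sketch of the zoned-namespace filter seen in the trace (is_block_zoned):
# a block device counts as zoned only when the queue/zoned attribute exists
# and its contents are not "none".
is_block_zoned() {
    local device=$1 sysfs_root=${2:-/sys/block}
    # Kernels built without zoned-block support omit the attribute entirely,
    # in which case the device is treated as conventional.
    [[ -e $sysfs_root/$device/queue/zoned ]] || return 1
    [[ $(<"$sysfs_root/$device/queue/zoned") != none ]]
}
```

In the trace all four namespaces (nvme0n1, nvme1n1-n3) report `none`, so `(( 0 > 0 ))` is false and every device proceeds to the GPT check and `dd` wipe.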
00:06:22.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:22.606 02:20:41 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.606 02:20:41 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.606 02:20:41 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.606 02:20:41 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.606 02:20:41 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.606 02:20:41 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.606 02:20:41 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.606 02:20:41 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.606 02:20:41 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.606 02:20:41 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.606 02:20:41 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.606 02:20:41 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.606 02:20:41 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.606 02:20:41 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.607 02:20:41 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.607 02:20:41 env -- scripts/common.sh@344 -- # case "$op" in 00:06:22.607 02:20:41 env -- scripts/common.sh@345 -- # : 1 00:06:22.607 02:20:41 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.607 02:20:41 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.607 02:20:41 env -- scripts/common.sh@365 -- # decimal 1 00:06:22.607 02:20:41 env -- scripts/common.sh@353 -- # local d=1 00:06:22.607 02:20:41 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.607 02:20:41 env -- scripts/common.sh@355 -- # echo 1 00:06:22.607 02:20:41 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.607 02:20:41 env -- scripts/common.sh@366 -- # decimal 2 00:06:22.607 02:20:41 env -- scripts/common.sh@353 -- # local d=2 00:06:22.607 02:20:41 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.607 02:20:41 env -- scripts/common.sh@355 -- # echo 2 00:06:22.607 02:20:41 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.607 02:20:41 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.607 02:20:41 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.607 02:20:41 env -- scripts/common.sh@368 -- # return 0 00:06:22.607 02:20:41 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.607 02:20:41 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.607 --rc genhtml_branch_coverage=1 00:06:22.607 --rc genhtml_function_coverage=1 00:06:22.607 --rc genhtml_legend=1 00:06:22.607 --rc geninfo_all_blocks=1 00:06:22.607 --rc geninfo_unexecuted_blocks=1 00:06:22.607 00:06:22.607 ' 00:06:22.607 02:20:41 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.607 --rc genhtml_branch_coverage=1 00:06:22.607 --rc genhtml_function_coverage=1 00:06:22.607 --rc genhtml_legend=1 00:06:22.607 --rc geninfo_all_blocks=1 00:06:22.607 --rc geninfo_unexecuted_blocks=1 00:06:22.607 00:06:22.607 ' 00:06:22.607 02:20:41 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:22.607 --rc genhtml_branch_coverage=1 00:06:22.607 --rc genhtml_function_coverage=1 00:06:22.607 --rc genhtml_legend=1 00:06:22.607 --rc geninfo_all_blocks=1 00:06:22.607 --rc geninfo_unexecuted_blocks=1 00:06:22.607 00:06:22.607 ' 00:06:22.607 02:20:41 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.607 --rc genhtml_branch_coverage=1 00:06:22.607 --rc genhtml_function_coverage=1 00:06:22.607 --rc genhtml_legend=1 00:06:22.607 --rc geninfo_all_blocks=1 00:06:22.607 --rc geninfo_unexecuted_blocks=1 00:06:22.607 00:06:22.607 ' 00:06:22.607 02:20:41 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:22.607 02:20:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.607 02:20:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.607 02:20:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.607 ************************************ 00:06:22.607 START TEST env_memory 00:06:22.607 ************************************ 00:06:22.607 02:20:41 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:22.607 00:06:22.607 00:06:22.607 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.607 http://cunit.sourceforge.net/ 00:06:22.607 00:06:22.607 00:06:22.607 Suite: memory 00:06:22.607 Test: alloc and free memory map ...[2024-10-13 02:20:41.231133] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:22.607 passed 00:06:22.607 Test: mem map translation ...[2024-10-13 02:20:41.273050] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:22.607 [2024-10-13 02:20:41.273098] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:22.607 [2024-10-13 02:20:41.273160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:22.607 [2024-10-13 02:20:41.273177] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:22.867 passed 00:06:22.867 Test: mem map registration ...[2024-10-13 02:20:41.339061] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:22.867 [2024-10-13 02:20:41.339108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:22.867 passed 00:06:22.867 Test: mem map adjacent registrations ...passed 00:06:22.867 00:06:22.867 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.867 suites 1 1 n/a 0 0 00:06:22.867 tests 4 4 4 0 0 00:06:22.867 asserts 152 152 152 0 n/a 00:06:22.867 00:06:22.867 Elapsed time = 0.233 seconds 00:06:22.867 00:06:22.867 real 0m0.285s 00:06:22.867 user 0m0.248s 00:06:22.867 sys 0m0.026s 00:06:22.867 02:20:41 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.867 02:20:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:22.867 ************************************ 00:06:22.867 END TEST env_memory 00:06:22.867 ************************************ 00:06:22.867 02:20:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:22.867 02:20:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.867 02:20:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.867 02:20:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.867 
************************************ 00:06:22.867 START TEST env_vtophys 00:06:22.867 ************************************ 00:06:22.867 02:20:41 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:23.127 EAL: lib.eal log level changed from notice to debug 00:06:23.127 EAL: Detected lcore 0 as core 0 on socket 0 00:06:23.127 EAL: Detected lcore 1 as core 0 on socket 0 00:06:23.127 EAL: Detected lcore 2 as core 0 on socket 0 00:06:23.127 EAL: Detected lcore 3 as core 0 on socket 0 00:06:23.127 EAL: Detected lcore 4 as core 0 on socket 0 00:06:23.127 EAL: Detected lcore 5 as core 0 on socket 0 00:06:23.127 EAL: Detected lcore 6 as core 0 on socket 0 00:06:23.127 EAL: Detected lcore 7 as core 0 on socket 0 00:06:23.127 EAL: Detected lcore 8 as core 0 on socket 0 00:06:23.127 EAL: Detected lcore 9 as core 0 on socket 0 00:06:23.127 EAL: Maximum logical cores by configuration: 128 00:06:23.127 EAL: Detected CPU lcores: 10 00:06:23.127 EAL: Detected NUMA nodes: 1 00:06:23.127 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:23.127 EAL: Detected shared linkage of DPDK 00:06:23.127 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:23.127 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:23.127 EAL: Registered [vdev] bus. 
00:06:23.127 EAL: bus.vdev log level changed from disabled to notice 00:06:23.127 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:23.127 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:23.127 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:23.127 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:23.127 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:23.127 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:23.127 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:23.127 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:23.127 EAL: No shared files mode enabled, IPC will be disabled 00:06:23.127 EAL: No shared files mode enabled, IPC is disabled 00:06:23.127 EAL: Selected IOVA mode 'PA' 00:06:23.127 EAL: Probing VFIO support... 00:06:23.127 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:23.127 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:23.127 EAL: Ask a virtual area of 0x2e000 bytes 00:06:23.127 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:23.127 EAL: Setting up physically contiguous memory... 
00:06:23.127 EAL: Setting maximum number of open files to 524288 00:06:23.127 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:23.127 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:23.127 EAL: Ask a virtual area of 0x61000 bytes 00:06:23.127 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:23.127 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:23.127 EAL: Ask a virtual area of 0x400000000 bytes 00:06:23.127 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:23.127 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:23.127 EAL: Ask a virtual area of 0x61000 bytes 00:06:23.127 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:23.127 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:23.127 EAL: Ask a virtual area of 0x400000000 bytes 00:06:23.127 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:23.127 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:23.127 EAL: Ask a virtual area of 0x61000 bytes 00:06:23.127 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:23.127 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:23.127 EAL: Ask a virtual area of 0x400000000 bytes 00:06:23.127 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:23.127 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:23.127 EAL: Ask a virtual area of 0x61000 bytes 00:06:23.127 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:23.127 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:23.127 EAL: Ask a virtual area of 0x400000000 bytes 00:06:23.127 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:23.127 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:23.127 EAL: Hugepages will be freed exactly as allocated. 
00:06:23.127 EAL: No shared files mode enabled, IPC is disabled 00:06:23.127 EAL: No shared files mode enabled, IPC is disabled 00:06:23.127 EAL: TSC frequency is ~2290000 KHz 00:06:23.127 EAL: Main lcore 0 is ready (tid=7f7d08a31a40;cpuset=[0]) 00:06:23.127 EAL: Trying to obtain current memory policy. 00:06:23.127 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.127 EAL: Restoring previous memory policy: 0 00:06:23.127 EAL: request: mp_malloc_sync 00:06:23.127 EAL: No shared files mode enabled, IPC is disabled 00:06:23.127 EAL: Heap on socket 0 was expanded by 2MB 00:06:23.127 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:23.127 EAL: No shared files mode enabled, IPC is disabled 00:06:23.127 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:23.127 EAL: Mem event callback 'spdk:(nil)' registered 00:06:23.127 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:23.127 00:06:23.127 00:06:23.127 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.127 http://cunit.sourceforge.net/ 00:06:23.127 00:06:23.127 00:06:23.127 Suite: components_suite 00:06:23.387 Test: vtophys_malloc_test ...passed 00:06:23.387 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:23.387 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.387 EAL: Restoring previous memory policy: 4 00:06:23.387 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.387 EAL: request: mp_malloc_sync 00:06:23.387 EAL: No shared files mode enabled, IPC is disabled 00:06:23.387 EAL: Heap on socket 0 was expanded by 4MB 00:06:23.387 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.387 EAL: request: mp_malloc_sync 00:06:23.387 EAL: No shared files mode enabled, IPC is disabled 00:06:23.387 EAL: Heap on socket 0 was shrunk by 4MB 00:06:23.387 EAL: Trying to obtain current memory policy. 
00:06:23.387 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.387 EAL: Restoring previous memory policy: 4 00:06:23.387 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.387 EAL: request: mp_malloc_sync 00:06:23.387 EAL: No shared files mode enabled, IPC is disabled 00:06:23.387 EAL: Heap on socket 0 was expanded by 6MB 00:06:23.387 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.387 EAL: request: mp_malloc_sync 00:06:23.387 EAL: No shared files mode enabled, IPC is disabled 00:06:23.387 EAL: Heap on socket 0 was shrunk by 6MB 00:06:23.387 EAL: Trying to obtain current memory policy. 00:06:23.387 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.387 EAL: Restoring previous memory policy: 4 00:06:23.387 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.387 EAL: request: mp_malloc_sync 00:06:23.387 EAL: No shared files mode enabled, IPC is disabled 00:06:23.387 EAL: Heap on socket 0 was expanded by 10MB 00:06:23.387 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.387 EAL: request: mp_malloc_sync 00:06:23.387 EAL: No shared files mode enabled, IPC is disabled 00:06:23.387 EAL: Heap on socket 0 was shrunk by 10MB 00:06:23.387 EAL: Trying to obtain current memory policy. 00:06:23.387 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.387 EAL: Restoring previous memory policy: 4 00:06:23.387 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.387 EAL: request: mp_malloc_sync 00:06:23.387 EAL: No shared files mode enabled, IPC is disabled 00:06:23.387 EAL: Heap on socket 0 was expanded by 18MB 00:06:23.387 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.647 EAL: request: mp_malloc_sync 00:06:23.647 EAL: No shared files mode enabled, IPC is disabled 00:06:23.647 EAL: Heap on socket 0 was shrunk by 18MB 00:06:23.647 EAL: Trying to obtain current memory policy. 
00:06:23.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.647 EAL: Restoring previous memory policy: 4 00:06:23.647 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.647 EAL: request: mp_malloc_sync 00:06:23.647 EAL: No shared files mode enabled, IPC is disabled 00:06:23.647 EAL: Heap on socket 0 was expanded by 34MB 00:06:23.647 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.647 EAL: request: mp_malloc_sync 00:06:23.647 EAL: No shared files mode enabled, IPC is disabled 00:06:23.647 EAL: Heap on socket 0 was shrunk by 34MB 00:06:23.647 EAL: Trying to obtain current memory policy. 00:06:23.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.647 EAL: Restoring previous memory policy: 4 00:06:23.647 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.647 EAL: request: mp_malloc_sync 00:06:23.647 EAL: No shared files mode enabled, IPC is disabled 00:06:23.647 EAL: Heap on socket 0 was expanded by 66MB 00:06:23.647 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.647 EAL: request: mp_malloc_sync 00:06:23.647 EAL: No shared files mode enabled, IPC is disabled 00:06:23.647 EAL: Heap on socket 0 was shrunk by 66MB 00:06:23.647 EAL: Trying to obtain current memory policy. 00:06:23.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.647 EAL: Restoring previous memory policy: 4 00:06:23.647 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.647 EAL: request: mp_malloc_sync 00:06:23.647 EAL: No shared files mode enabled, IPC is disabled 00:06:23.647 EAL: Heap on socket 0 was expanded by 130MB 00:06:23.647 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.647 EAL: request: mp_malloc_sync 00:06:23.647 EAL: No shared files mode enabled, IPC is disabled 00:06:23.647 EAL: Heap on socket 0 was shrunk by 130MB 00:06:23.647 EAL: Trying to obtain current memory policy. 
00:06:23.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.647 EAL: Restoring previous memory policy: 4 00:06:23.647 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.647 EAL: request: mp_malloc_sync 00:06:23.647 EAL: No shared files mode enabled, IPC is disabled 00:06:23.647 EAL: Heap on socket 0 was expanded by 258MB 00:06:23.647 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.647 EAL: request: mp_malloc_sync 00:06:23.647 EAL: No shared files mode enabled, IPC is disabled 00:06:23.647 EAL: Heap on socket 0 was shrunk by 258MB 00:06:23.647 EAL: Trying to obtain current memory policy. 00:06:23.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.907 EAL: Restoring previous memory policy: 4 00:06:23.907 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.907 EAL: request: mp_malloc_sync 00:06:23.907 EAL: No shared files mode enabled, IPC is disabled 00:06:23.907 EAL: Heap on socket 0 was expanded by 514MB 00:06:23.907 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.907 EAL: request: mp_malloc_sync 00:06:23.907 EAL: No shared files mode enabled, IPC is disabled 00:06:23.907 EAL: Heap on socket 0 was shrunk by 514MB 00:06:23.907 EAL: Trying to obtain current memory policy. 
00:06:23.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:24.166 EAL: Restoring previous memory policy: 4 00:06:24.166 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.166 EAL: request: mp_malloc_sync 00:06:24.166 EAL: No shared files mode enabled, IPC is disabled 00:06:24.166 EAL: Heap on socket 0 was expanded by 1026MB 00:06:24.426 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.426 passed 00:06:24.426 00:06:24.426 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.426 suites 1 1 n/a 0 0 00:06:24.426 tests 2 2 2 0 0 00:06:24.426 asserts 5302 5302 5302 0 n/a 00:06:24.426 00:06:24.426 Elapsed time = 1.369 seconds 00:06:24.426 EAL: request: mp_malloc_sync 00:06:24.426 EAL: No shared files mode enabled, IPC is disabled 00:06:24.426 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:24.426 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.426 EAL: request: mp_malloc_sync 00:06:24.426 EAL: No shared files mode enabled, IPC is disabled 00:06:24.426 EAL: Heap on socket 0 was shrunk by 2MB 00:06:24.426 EAL: No shared files mode enabled, IPC is disabled 00:06:24.426 EAL: No shared files mode enabled, IPC is disabled 00:06:24.426 EAL: No shared files mode enabled, IPC is disabled 00:06:24.686 00:06:24.686 real 0m1.615s 00:06:24.686 user 0m0.761s 00:06:24.686 sys 0m0.722s 00:06:24.686 02:20:43 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.686 02:20:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:24.686 ************************************ 00:06:24.686 END TEST env_vtophys 00:06:24.686 ************************************ 00:06:24.686 02:20:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:24.686 02:20:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.686 02:20:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.686 02:20:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.686 
************************************ 00:06:24.686 START TEST env_pci 00:06:24.686 ************************************ 00:06:24.686 02:20:43 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:24.686 00:06:24.686 00:06:24.686 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.686 http://cunit.sourceforge.net/ 00:06:24.686 00:06:24.686 00:06:24.686 Suite: pci 00:06:24.686 Test: pci_hook ...[2024-10-13 02:20:43.228377] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68861 has claimed it 00:06:24.686 passed 00:06:24.686 00:06:24.686 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.686 suites 1 1 n/a 0 0 00:06:24.686 tests 1 1 1 0 0 00:06:24.687 asserts 25 25 25 0 n/a 00:06:24.687 00:06:24.687 Elapsed time = 0.007 secondsEAL: Cannot find device (10000:00:01.0) 00:06:24.687 EAL: Failed to attach device on primary process 00:06:24.687 00:06:24.687 00:06:24.687 real 0m0.092s 00:06:24.687 user 0m0.040s 00:06:24.687 sys 0m0.051s 00:06:24.687 02:20:43 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.687 02:20:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:24.687 ************************************ 00:06:24.687 END TEST env_pci 00:06:24.687 ************************************ 00:06:24.687 02:20:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:24.687 02:20:43 env -- env/env.sh@15 -- # uname 00:06:24.687 02:20:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:24.687 02:20:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:24.687 02:20:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:24.687 02:20:43 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:24.687 02:20:43 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.687 02:20:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.946 ************************************ 00:06:24.946 START TEST env_dpdk_post_init 00:06:24.946 ************************************ 00:06:24.946 02:20:43 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:24.946 EAL: Detected CPU lcores: 10 00:06:24.946 EAL: Detected NUMA nodes: 1 00:06:24.946 EAL: Detected shared linkage of DPDK 00:06:24.946 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:24.946 EAL: Selected IOVA mode 'PA' 00:06:24.946 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:24.946 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:24.946 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:24.946 Starting DPDK initialization... 00:06:24.946 Starting SPDK post initialization... 00:06:24.946 SPDK NVMe probe 00:06:24.946 Attaching to 0000:00:10.0 00:06:24.946 Attaching to 0000:00:11.0 00:06:24.946 Attached to 0000:00:10.0 00:06:24.946 Attached to 0000:00:11.0 00:06:24.946 Cleaning up... 
00:06:24.946 00:06:24.946 real 0m0.229s 00:06:24.946 user 0m0.059s 00:06:24.946 sys 0m0.072s 00:06:24.946 02:20:43 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.946 02:20:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:24.946 ************************************ 00:06:24.946 END TEST env_dpdk_post_init 00:06:24.946 ************************************ 00:06:25.206 02:20:43 env -- env/env.sh@26 -- # uname 00:06:25.206 02:20:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:25.206 02:20:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:25.206 02:20:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.206 02:20:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.206 02:20:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.206 ************************************ 00:06:25.206 START TEST env_mem_callbacks 00:06:25.206 ************************************ 00:06:25.206 02:20:43 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:25.206 EAL: Detected CPU lcores: 10 00:06:25.206 EAL: Detected NUMA nodes: 1 00:06:25.206 EAL: Detected shared linkage of DPDK 00:06:25.206 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:25.206 EAL: Selected IOVA mode 'PA' 00:06:25.206 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:25.206 00:06:25.206 00:06:25.206 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.206 http://cunit.sourceforge.net/ 00:06:25.206 00:06:25.206 00:06:25.206 Suite: memory 00:06:25.206 Test: test ... 
00:06:25.206 register 0x200000200000 2097152 00:06:25.206 malloc 3145728 00:06:25.206 register 0x200000400000 4194304 00:06:25.206 buf 0x200000500000 len 3145728 PASSED 00:06:25.206 malloc 64 00:06:25.206 buf 0x2000004fff40 len 64 PASSED 00:06:25.206 malloc 4194304 00:06:25.206 register 0x200000800000 6291456 00:06:25.207 buf 0x200000a00000 len 4194304 PASSED 00:06:25.207 free 0x200000500000 3145728 00:06:25.207 free 0x2000004fff40 64 00:06:25.207 unregister 0x200000400000 4194304 PASSED 00:06:25.207 free 0x200000a00000 4194304 00:06:25.207 unregister 0x200000800000 6291456 PASSED 00:06:25.207 malloc 8388608 00:06:25.207 register 0x200000400000 10485760 00:06:25.207 buf 0x200000600000 len 8388608 PASSED 00:06:25.207 free 0x200000600000 8388608 00:06:25.207 unregister 0x200000400000 10485760 PASSED 00:06:25.207 passed 00:06:25.207 00:06:25.207 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.207 suites 1 1 n/a 0 0 00:06:25.207 tests 1 1 1 0 0 00:06:25.207 asserts 15 15 15 0 n/a 00:06:25.207 00:06:25.207 Elapsed time = 0.011 seconds 00:06:25.207 00:06:25.207 real 0m0.182s 00:06:25.207 user 0m0.029s 00:06:25.207 sys 0m0.051s 00:06:25.207 02:20:43 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.207 02:20:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:25.207 ************************************ 00:06:25.207 END TEST env_mem_callbacks 00:06:25.207 ************************************ 00:06:25.466 00:06:25.466 real 0m2.993s 00:06:25.466 user 0m1.368s 00:06:25.466 sys 0m1.298s 00:06:25.466 02:20:43 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.466 02:20:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:25.466 ************************************ 00:06:25.466 END TEST env 00:06:25.466 ************************************ 00:06:25.466 02:20:43 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:25.466 02:20:43 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.466 02:20:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.466 02:20:43 -- common/autotest_common.sh@10 -- # set +x 00:06:25.466 ************************************ 00:06:25.466 START TEST rpc 00:06:25.466 ************************************ 00:06:25.466 02:20:43 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:25.466 * Looking for test storage... 00:06:25.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:25.466 02:20:44 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:25.466 02:20:44 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:25.466 02:20:44 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:25.726 02:20:44 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:25.726 02:20:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.726 02:20:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.726 02:20:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.726 02:20:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.726 02:20:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.726 02:20:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.726 02:20:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.726 02:20:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.726 02:20:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.726 02:20:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.726 02:20:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.726 02:20:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:25.726 02:20:44 rpc -- scripts/common.sh@345 -- # : 1 00:06:25.726 02:20:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.726 02:20:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.726 02:20:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:25.726 02:20:44 rpc -- scripts/common.sh@353 -- # local d=1 00:06:25.726 02:20:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.726 02:20:44 rpc -- scripts/common.sh@355 -- # echo 1 00:06:25.726 02:20:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.726 02:20:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:25.726 02:20:44 rpc -- scripts/common.sh@353 -- # local d=2 00:06:25.726 02:20:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.726 02:20:44 rpc -- scripts/common.sh@355 -- # echo 2 00:06:25.726 02:20:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.726 02:20:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.726 02:20:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.726 02:20:44 rpc -- scripts/common.sh@368 -- # return 0 00:06:25.726 02:20:44 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.726 02:20:44 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:25.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.726 --rc genhtml_branch_coverage=1 00:06:25.726 --rc genhtml_function_coverage=1 00:06:25.726 --rc genhtml_legend=1 00:06:25.726 --rc geninfo_all_blocks=1 00:06:25.726 --rc geninfo_unexecuted_blocks=1 00:06:25.726 00:06:25.726 ' 00:06:25.726 02:20:44 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:25.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.726 --rc genhtml_branch_coverage=1 00:06:25.726 --rc genhtml_function_coverage=1 00:06:25.726 --rc genhtml_legend=1 00:06:25.726 --rc geninfo_all_blocks=1 00:06:25.726 --rc geninfo_unexecuted_blocks=1 00:06:25.726 00:06:25.726 ' 00:06:25.727 02:20:44 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:25.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:25.727 --rc genhtml_branch_coverage=1 00:06:25.727 --rc genhtml_function_coverage=1 00:06:25.727 --rc genhtml_legend=1 00:06:25.727 --rc geninfo_all_blocks=1 00:06:25.727 --rc geninfo_unexecuted_blocks=1 00:06:25.727 00:06:25.727 ' 00:06:25.727 02:20:44 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:25.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.727 --rc genhtml_branch_coverage=1 00:06:25.727 --rc genhtml_function_coverage=1 00:06:25.727 --rc genhtml_legend=1 00:06:25.727 --rc geninfo_all_blocks=1 00:06:25.727 --rc geninfo_unexecuted_blocks=1 00:06:25.727 00:06:25.727 ' 00:06:25.727 02:20:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68988 00:06:25.727 02:20:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:25.727 02:20:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.727 02:20:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68988 00:06:25.727 02:20:44 rpc -- common/autotest_common.sh@831 -- # '[' -z 68988 ']' 00:06:25.727 02:20:44 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.727 02:20:44 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.727 02:20:44 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.727 02:20:44 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.727 02:20:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.727 [2024-10-13 02:20:44.318529] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:25.727 [2024-10-13 02:20:44.318668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68988 ] 00:06:25.986 [2024-10-13 02:20:44.454421] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.986 [2024-10-13 02:20:44.499963] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:25.986 [2024-10-13 02:20:44.500037] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68988' to capture a snapshot of events at runtime. 00:06:25.986 [2024-10-13 02:20:44.500050] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:25.986 [2024-10-13 02:20:44.500059] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:25.986 [2024-10-13 02:20:44.500079] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68988 for offline analysis/debug. 
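The `waitforlisten 68988` trace above polls until the freshly started spdk_tgt is reachable on /var/tmp/spdk.sock before the rpc tests proceed. A generic sketch of that wait-for-socket pattern (a hypothetical helper, not SPDK's actual `waitforlisten`; it only checks that the UNIX-domain socket file appears, which is a weaker readiness test than connecting and issuing an RPC as the real helper effectively does):

```shell
# Hypothetical wait-for-socket loop (NOT SPDK's waitforlisten): retry until
# a UNIX-domain socket file exists at the given path, or give up after
# max_retries attempts of 0.1s each.
waitforsocket() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        # -S becomes true once the server has bound its UNIX-domain socket;
        # a real readiness check would also connect and issue an RPC.
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

A bound-but-idle socket would still pass this check, which is why SPDK's helper goes further and verifies the target actually answers RPCs.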
00:06:25.986 [2024-10-13 02:20:44.500133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.554 02:20:45 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.554 02:20:45 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:26.554 02:20:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:26.554 02:20:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:26.554 02:20:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:26.554 02:20:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:26.554 02:20:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.554 02:20:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.554 02:20:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.554 ************************************ 00:06:26.554 START TEST rpc_integrity 00:06:26.554 ************************************ 00:06:26.554 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:26.554 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:26.554 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.554 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.554 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.554 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:26.554 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:26.554 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:26.554 02:20:45 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:26.554 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.554 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.554 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.554 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:26.554 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:26.554 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.554 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.814 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.814 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:26.814 { 00:06:26.814 "name": "Malloc0", 00:06:26.814 "aliases": [ 00:06:26.814 "e428506a-f14a-4147-b339-00f1dbf801a2" 00:06:26.814 ], 00:06:26.814 "product_name": "Malloc disk", 00:06:26.814 "block_size": 512, 00:06:26.814 "num_blocks": 16384, 00:06:26.814 "uuid": "e428506a-f14a-4147-b339-00f1dbf801a2", 00:06:26.814 "assigned_rate_limits": { 00:06:26.814 "rw_ios_per_sec": 0, 00:06:26.814 "rw_mbytes_per_sec": 0, 00:06:26.814 "r_mbytes_per_sec": 0, 00:06:26.814 "w_mbytes_per_sec": 0 00:06:26.814 }, 00:06:26.814 "claimed": false, 00:06:26.814 "zoned": false, 00:06:26.814 "supported_io_types": { 00:06:26.814 "read": true, 00:06:26.814 "write": true, 00:06:26.814 "unmap": true, 00:06:26.814 "flush": true, 00:06:26.814 "reset": true, 00:06:26.814 "nvme_admin": false, 00:06:26.814 "nvme_io": false, 00:06:26.814 "nvme_io_md": false, 00:06:26.814 "write_zeroes": true, 00:06:26.814 "zcopy": true, 00:06:26.814 "get_zone_info": false, 00:06:26.814 "zone_management": false, 00:06:26.814 "zone_append": false, 00:06:26.814 "compare": false, 00:06:26.814 "compare_and_write": false, 00:06:26.814 "abort": true, 00:06:26.814 "seek_hole": false, 
00:06:26.814 "seek_data": false, 00:06:26.814 "copy": true, 00:06:26.814 "nvme_iov_md": false 00:06:26.814 }, 00:06:26.814 "memory_domains": [ 00:06:26.814 { 00:06:26.814 "dma_device_id": "system", 00:06:26.814 "dma_device_type": 1 00:06:26.814 }, 00:06:26.814 { 00:06:26.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.814 "dma_device_type": 2 00:06:26.814 } 00:06:26.814 ], 00:06:26.814 "driver_specific": {} 00:06:26.814 } 00:06:26.814 ]' 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.815 [2024-10-13 02:20:45.290373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:26.815 [2024-10-13 02:20:45.290485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.815 [2024-10-13 02:20:45.290516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:06:26.815 [2024-10-13 02:20:45.290526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.815 [2024-10-13 02:20:45.292847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.815 [2024-10-13 02:20:45.292909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:26.815 Passthru0 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:26.815 { 00:06:26.815 "name": "Malloc0", 00:06:26.815 "aliases": [ 00:06:26.815 "e428506a-f14a-4147-b339-00f1dbf801a2" 00:06:26.815 ], 00:06:26.815 "product_name": "Malloc disk", 00:06:26.815 "block_size": 512, 00:06:26.815 "num_blocks": 16384, 00:06:26.815 "uuid": "e428506a-f14a-4147-b339-00f1dbf801a2", 00:06:26.815 "assigned_rate_limits": { 00:06:26.815 "rw_ios_per_sec": 0, 00:06:26.815 "rw_mbytes_per_sec": 0, 00:06:26.815 "r_mbytes_per_sec": 0, 00:06:26.815 "w_mbytes_per_sec": 0 00:06:26.815 }, 00:06:26.815 "claimed": true, 00:06:26.815 "claim_type": "exclusive_write", 00:06:26.815 "zoned": false, 00:06:26.815 "supported_io_types": { 00:06:26.815 "read": true, 00:06:26.815 "write": true, 00:06:26.815 "unmap": true, 00:06:26.815 "flush": true, 00:06:26.815 "reset": true, 00:06:26.815 "nvme_admin": false, 00:06:26.815 "nvme_io": false, 00:06:26.815 "nvme_io_md": false, 00:06:26.815 "write_zeroes": true, 00:06:26.815 "zcopy": true, 00:06:26.815 "get_zone_info": false, 00:06:26.815 "zone_management": false, 00:06:26.815 "zone_append": false, 00:06:26.815 "compare": false, 00:06:26.815 "compare_and_write": false, 00:06:26.815 "abort": true, 00:06:26.815 "seek_hole": false, 00:06:26.815 "seek_data": false, 00:06:26.815 "copy": true, 00:06:26.815 "nvme_iov_md": false 00:06:26.815 }, 00:06:26.815 "memory_domains": [ 00:06:26.815 { 00:06:26.815 "dma_device_id": "system", 00:06:26.815 "dma_device_type": 1 00:06:26.815 }, 00:06:26.815 { 00:06:26.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.815 "dma_device_type": 2 00:06:26.815 } 00:06:26.815 ], 00:06:26.815 "driver_specific": {} 00:06:26.815 }, 00:06:26.815 { 00:06:26.815 "name": "Passthru0", 00:06:26.815 "aliases": [ 00:06:26.815 "4a218094-64a2-5860-8a70-2bd93dc57ef6" 00:06:26.815 ], 00:06:26.815 "product_name": "passthru", 00:06:26.815 
"block_size": 512, 00:06:26.815 "num_blocks": 16384, 00:06:26.815 "uuid": "4a218094-64a2-5860-8a70-2bd93dc57ef6", 00:06:26.815 "assigned_rate_limits": { 00:06:26.815 "rw_ios_per_sec": 0, 00:06:26.815 "rw_mbytes_per_sec": 0, 00:06:26.815 "r_mbytes_per_sec": 0, 00:06:26.815 "w_mbytes_per_sec": 0 00:06:26.815 }, 00:06:26.815 "claimed": false, 00:06:26.815 "zoned": false, 00:06:26.815 "supported_io_types": { 00:06:26.815 "read": true, 00:06:26.815 "write": true, 00:06:26.815 "unmap": true, 00:06:26.815 "flush": true, 00:06:26.815 "reset": true, 00:06:26.815 "nvme_admin": false, 00:06:26.815 "nvme_io": false, 00:06:26.815 "nvme_io_md": false, 00:06:26.815 "write_zeroes": true, 00:06:26.815 "zcopy": true, 00:06:26.815 "get_zone_info": false, 00:06:26.815 "zone_management": false, 00:06:26.815 "zone_append": false, 00:06:26.815 "compare": false, 00:06:26.815 "compare_and_write": false, 00:06:26.815 "abort": true, 00:06:26.815 "seek_hole": false, 00:06:26.815 "seek_data": false, 00:06:26.815 "copy": true, 00:06:26.815 "nvme_iov_md": false 00:06:26.815 }, 00:06:26.815 "memory_domains": [ 00:06:26.815 { 00:06:26.815 "dma_device_id": "system", 00:06:26.815 "dma_device_type": 1 00:06:26.815 }, 00:06:26.815 { 00:06:26.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.815 "dma_device_type": 2 00:06:26.815 } 00:06:26.815 ], 00:06:26.815 "driver_specific": { 00:06:26.815 "passthru": { 00:06:26.815 "name": "Passthru0", 00:06:26.815 "base_bdev_name": "Malloc0" 00:06:26.815 } 00:06:26.815 } 00:06:26.815 } 00:06:26.815 ]' 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.815 02:20:45 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:26.815 02:20:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:26.815 00:06:26.815 real 0m0.323s 00:06:26.815 user 0m0.191s 00:06:26.815 sys 0m0.057s 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.815 02:20:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.815 ************************************ 00:06:26.815 END TEST rpc_integrity 00:06:26.815 ************************************ 00:06:27.074 02:20:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:27.074 02:20:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.074 02:20:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.074 02:20:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.074 ************************************ 00:06:27.074 START TEST rpc_plugins 00:06:27.074 ************************************ 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:27.074 { 00:06:27.074 "name": "Malloc1", 00:06:27.074 "aliases": [ 00:06:27.074 "66e59163-cbf2-4694-9963-76dcc5e1c1b4" 00:06:27.074 ], 00:06:27.074 "product_name": "Malloc disk", 00:06:27.074 "block_size": 4096, 00:06:27.074 "num_blocks": 256, 00:06:27.074 "uuid": "66e59163-cbf2-4694-9963-76dcc5e1c1b4", 00:06:27.074 "assigned_rate_limits": { 00:06:27.074 "rw_ios_per_sec": 0, 00:06:27.074 "rw_mbytes_per_sec": 0, 00:06:27.074 "r_mbytes_per_sec": 0, 00:06:27.074 "w_mbytes_per_sec": 0 00:06:27.074 }, 00:06:27.074 "claimed": false, 00:06:27.074 "zoned": false, 00:06:27.074 "supported_io_types": { 00:06:27.074 "read": true, 00:06:27.074 "write": true, 00:06:27.074 "unmap": true, 00:06:27.074 "flush": true, 00:06:27.074 "reset": true, 00:06:27.074 "nvme_admin": false, 00:06:27.074 "nvme_io": false, 00:06:27.074 "nvme_io_md": false, 00:06:27.074 "write_zeroes": true, 00:06:27.074 "zcopy": true, 00:06:27.074 "get_zone_info": false, 00:06:27.074 "zone_management": false, 00:06:27.074 "zone_append": false, 00:06:27.074 "compare": false, 00:06:27.074 "compare_and_write": false, 00:06:27.074 "abort": true, 00:06:27.074 "seek_hole": false, 00:06:27.074 "seek_data": false, 00:06:27.074 "copy": 
true, 00:06:27.074 "nvme_iov_md": false 00:06:27.074 }, 00:06:27.074 "memory_domains": [ 00:06:27.074 { 00:06:27.074 "dma_device_id": "system", 00:06:27.074 "dma_device_type": 1 00:06:27.074 }, 00:06:27.074 { 00:06:27.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.074 "dma_device_type": 2 00:06:27.074 } 00:06:27.074 ], 00:06:27.074 "driver_specific": {} 00:06:27.074 } 00:06:27.074 ]' 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:27.074 02:20:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:27.074 00:06:27.074 real 0m0.162s 00:06:27.074 user 0m0.095s 00:06:27.074 sys 0m0.025s 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.074 02:20:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:27.074 ************************************ 00:06:27.074 END TEST rpc_plugins 00:06:27.074 ************************************ 00:06:27.074 02:20:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:27.074 02:20:45 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.074 02:20:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.074 02:20:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.074 ************************************ 00:06:27.074 START TEST rpc_trace_cmd_test 00:06:27.074 ************************************ 00:06:27.074 02:20:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:27.074 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:27.074 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:27.074 02:20:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.074 02:20:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:27.333 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68988", 00:06:27.333 "tpoint_group_mask": "0x8", 00:06:27.333 "iscsi_conn": { 00:06:27.333 "mask": "0x2", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "scsi": { 00:06:27.333 "mask": "0x4", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "bdev": { 00:06:27.333 "mask": "0x8", 00:06:27.333 "tpoint_mask": "0xffffffffffffffff" 00:06:27.333 }, 00:06:27.333 "nvmf_rdma": { 00:06:27.333 "mask": "0x10", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "nvmf_tcp": { 00:06:27.333 "mask": "0x20", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "ftl": { 00:06:27.333 "mask": "0x40", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "blobfs": { 00:06:27.333 "mask": "0x80", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "dsa": { 00:06:27.333 "mask": "0x200", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "thread": { 00:06:27.333 "mask": "0x400", 00:06:27.333 
"tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "nvme_pcie": { 00:06:27.333 "mask": "0x800", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "iaa": { 00:06:27.333 "mask": "0x1000", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "nvme_tcp": { 00:06:27.333 "mask": "0x2000", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "bdev_nvme": { 00:06:27.333 "mask": "0x4000", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "sock": { 00:06:27.333 "mask": "0x8000", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "blob": { 00:06:27.333 "mask": "0x10000", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 }, 00:06:27.333 "bdev_raid": { 00:06:27.333 "mask": "0x20000", 00:06:27.333 "tpoint_mask": "0x0" 00:06:27.333 } 00:06:27.333 }' 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:27.333 02:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:27.333 02:20:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:27.333 00:06:27.333 real 0m0.267s 00:06:27.333 user 0m0.207s 00:06:27.333 sys 0m0.048s 00:06:27.333 02:20:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.334 02:20:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.334 
************************************ 00:06:27.334 END TEST rpc_trace_cmd_test 00:06:27.334 ************************************ 00:06:27.591 02:20:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:27.592 02:20:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:27.592 02:20:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:27.592 02:20:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.592 02:20:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.592 02:20:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.592 ************************************ 00:06:27.592 START TEST rpc_daemon_integrity 00:06:27.592 ************************************ 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:27.592 { 00:06:27.592 "name": "Malloc2", 00:06:27.592 "aliases": [ 00:06:27.592 "7946b7d8-86f1-4697-9835-bab759d14cd3" 00:06:27.592 ], 00:06:27.592 "product_name": "Malloc disk", 00:06:27.592 "block_size": 512, 00:06:27.592 "num_blocks": 16384, 00:06:27.592 "uuid": "7946b7d8-86f1-4697-9835-bab759d14cd3", 00:06:27.592 "assigned_rate_limits": { 00:06:27.592 "rw_ios_per_sec": 0, 00:06:27.592 "rw_mbytes_per_sec": 0, 00:06:27.592 "r_mbytes_per_sec": 0, 00:06:27.592 "w_mbytes_per_sec": 0 00:06:27.592 }, 00:06:27.592 "claimed": false, 00:06:27.592 "zoned": false, 00:06:27.592 "supported_io_types": { 00:06:27.592 "read": true, 00:06:27.592 "write": true, 00:06:27.592 "unmap": true, 00:06:27.592 "flush": true, 00:06:27.592 "reset": true, 00:06:27.592 "nvme_admin": false, 00:06:27.592 "nvme_io": false, 00:06:27.592 "nvme_io_md": false, 00:06:27.592 "write_zeroes": true, 00:06:27.592 "zcopy": true, 00:06:27.592 "get_zone_info": false, 00:06:27.592 "zone_management": false, 00:06:27.592 "zone_append": false, 00:06:27.592 "compare": false, 00:06:27.592 "compare_and_write": false, 00:06:27.592 "abort": true, 00:06:27.592 "seek_hole": false, 00:06:27.592 "seek_data": false, 00:06:27.592 "copy": true, 00:06:27.592 "nvme_iov_md": false 00:06:27.592 }, 00:06:27.592 "memory_domains": [ 00:06:27.592 { 00:06:27.592 "dma_device_id": "system", 00:06:27.592 "dma_device_type": 1 00:06:27.592 }, 00:06:27.592 { 00:06:27.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.592 "dma_device_type": 2 00:06:27.592 } 00:06:27.592 ], 00:06:27.592 "driver_specific": {} 00:06:27.592 } 00:06:27.592 ]' 00:06:27.592 
02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.592 [2024-10-13 02:20:46.225416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:27.592 [2024-10-13 02:20:46.225512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:27.592 [2024-10-13 02:20:46.225537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:27.592 [2024-10-13 02:20:46.225547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:27.592 [2024-10-13 02:20:46.227972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:27.592 [2024-10-13 02:20:46.228011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:27.592 Passthru0 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:27.592 { 00:06:27.592 "name": "Malloc2", 00:06:27.592 "aliases": [ 00:06:27.592 "7946b7d8-86f1-4697-9835-bab759d14cd3" 00:06:27.592 ], 00:06:27.592 "product_name": "Malloc disk", 00:06:27.592 "block_size": 512, 
00:06:27.592 "num_blocks": 16384, 00:06:27.592 "uuid": "7946b7d8-86f1-4697-9835-bab759d14cd3", 00:06:27.592 "assigned_rate_limits": { 00:06:27.592 "rw_ios_per_sec": 0, 00:06:27.592 "rw_mbytes_per_sec": 0, 00:06:27.592 "r_mbytes_per_sec": 0, 00:06:27.592 "w_mbytes_per_sec": 0 00:06:27.592 }, 00:06:27.592 "claimed": true, 00:06:27.592 "claim_type": "exclusive_write", 00:06:27.592 "zoned": false, 00:06:27.592 "supported_io_types": { 00:06:27.592 "read": true, 00:06:27.592 "write": true, 00:06:27.592 "unmap": true, 00:06:27.592 "flush": true, 00:06:27.592 "reset": true, 00:06:27.592 "nvme_admin": false, 00:06:27.592 "nvme_io": false, 00:06:27.592 "nvme_io_md": false, 00:06:27.592 "write_zeroes": true, 00:06:27.592 "zcopy": true, 00:06:27.592 "get_zone_info": false, 00:06:27.592 "zone_management": false, 00:06:27.592 "zone_append": false, 00:06:27.592 "compare": false, 00:06:27.592 "compare_and_write": false, 00:06:27.592 "abort": true, 00:06:27.592 "seek_hole": false, 00:06:27.592 "seek_data": false, 00:06:27.592 "copy": true, 00:06:27.592 "nvme_iov_md": false 00:06:27.592 }, 00:06:27.592 "memory_domains": [ 00:06:27.592 { 00:06:27.592 "dma_device_id": "system", 00:06:27.592 "dma_device_type": 1 00:06:27.592 }, 00:06:27.592 { 00:06:27.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.592 "dma_device_type": 2 00:06:27.592 } 00:06:27.592 ], 00:06:27.592 "driver_specific": {} 00:06:27.592 }, 00:06:27.592 { 00:06:27.592 "name": "Passthru0", 00:06:27.592 "aliases": [ 00:06:27.592 "dd50b4cd-41de-53b0-9364-d241945dcc4d" 00:06:27.592 ], 00:06:27.592 "product_name": "passthru", 00:06:27.592 "block_size": 512, 00:06:27.592 "num_blocks": 16384, 00:06:27.592 "uuid": "dd50b4cd-41de-53b0-9364-d241945dcc4d", 00:06:27.592 "assigned_rate_limits": { 00:06:27.592 "rw_ios_per_sec": 0, 00:06:27.592 "rw_mbytes_per_sec": 0, 00:06:27.592 "r_mbytes_per_sec": 0, 00:06:27.592 "w_mbytes_per_sec": 0 00:06:27.592 }, 00:06:27.592 "claimed": false, 00:06:27.592 "zoned": false, 00:06:27.592 
"supported_io_types": { 00:06:27.592 "read": true, 00:06:27.592 "write": true, 00:06:27.592 "unmap": true, 00:06:27.592 "flush": true, 00:06:27.592 "reset": true, 00:06:27.592 "nvme_admin": false, 00:06:27.592 "nvme_io": false, 00:06:27.592 "nvme_io_md": false, 00:06:27.592 "write_zeroes": true, 00:06:27.592 "zcopy": true, 00:06:27.592 "get_zone_info": false, 00:06:27.592 "zone_management": false, 00:06:27.592 "zone_append": false, 00:06:27.592 "compare": false, 00:06:27.592 "compare_and_write": false, 00:06:27.592 "abort": true, 00:06:27.592 "seek_hole": false, 00:06:27.592 "seek_data": false, 00:06:27.592 "copy": true, 00:06:27.592 "nvme_iov_md": false 00:06:27.592 }, 00:06:27.592 "memory_domains": [ 00:06:27.592 { 00:06:27.592 "dma_device_id": "system", 00:06:27.592 "dma_device_type": 1 00:06:27.592 }, 00:06:27.592 { 00:06:27.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.592 "dma_device_type": 2 00:06:27.592 } 00:06:27.592 ], 00:06:27.592 "driver_specific": { 00:06:27.592 "passthru": { 00:06:27.592 "name": "Passthru0", 00:06:27.592 "base_bdev_name": "Malloc2" 00:06:27.592 } 00:06:27.592 } 00:06:27.592 } 00:06:27.592 ]' 00:06:27.592 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:27.855 00:06:27.855 real 0m0.299s 00:06:27.855 user 0m0.170s 00:06:27.855 sys 0m0.055s 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.855 02:20:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.855 ************************************ 00:06:27.855 END TEST rpc_daemon_integrity 00:06:27.855 ************************************ 00:06:27.855 02:20:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:27.855 02:20:46 rpc -- rpc/rpc.sh@84 -- # killprocess 68988 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@950 -- # '[' -z 68988 ']' 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@954 -- # kill -0 68988 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@955 -- # uname 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68988 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.855 killing process with pid 68988 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 68988' 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@969 -- # kill 68988 00:06:27.855 02:20:46 rpc -- common/autotest_common.sh@974 -- # wait 68988 00:06:28.429 00:06:28.429 real 0m2.887s 00:06:28.429 user 0m3.427s 00:06:28.429 sys 0m0.890s 00:06:28.429 02:20:46 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.429 02:20:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.429 ************************************ 00:06:28.429 END TEST rpc 00:06:28.429 ************************************ 00:06:28.429 02:20:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:28.429 02:20:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.429 02:20:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.429 02:20:46 -- common/autotest_common.sh@10 -- # set +x 00:06:28.429 ************************************ 00:06:28.429 START TEST skip_rpc 00:06:28.429 ************************************ 00:06:28.429 02:20:46 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:28.429 * Looking for test storage... 
00:06:28.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:28.429 02:20:47 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.429 02:20:47 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.429 02:20:47 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.687 02:20:47 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.687 02:20:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.687 02:20:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.687 02:20:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.687 02:20:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.687 02:20:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.687 02:20:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.688 02:20:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:28.688 02:20:47 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.688 02:20:47 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.688 --rc genhtml_branch_coverage=1 00:06:28.688 --rc genhtml_function_coverage=1 00:06:28.688 --rc genhtml_legend=1 00:06:28.688 --rc geninfo_all_blocks=1 00:06:28.688 --rc geninfo_unexecuted_blocks=1 00:06:28.688 00:06:28.688 ' 00:06:28.688 02:20:47 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.688 --rc genhtml_branch_coverage=1 00:06:28.688 --rc genhtml_function_coverage=1 00:06:28.688 --rc genhtml_legend=1 00:06:28.688 --rc geninfo_all_blocks=1 00:06:28.688 --rc geninfo_unexecuted_blocks=1 00:06:28.688 00:06:28.688 ' 00:06:28.688 02:20:47 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:28.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.688 --rc genhtml_branch_coverage=1 00:06:28.688 --rc genhtml_function_coverage=1 00:06:28.688 --rc genhtml_legend=1 00:06:28.688 --rc geninfo_all_blocks=1 00:06:28.688 --rc geninfo_unexecuted_blocks=1 00:06:28.688 00:06:28.688 ' 00:06:28.688 02:20:47 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.688 --rc genhtml_branch_coverage=1 00:06:28.688 --rc genhtml_function_coverage=1 00:06:28.688 --rc genhtml_legend=1 00:06:28.688 --rc geninfo_all_blocks=1 00:06:28.688 --rc geninfo_unexecuted_blocks=1 00:06:28.688 00:06:28.688 ' 00:06:28.688 02:20:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:28.688 02:20:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:28.688 02:20:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:28.688 02:20:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.688 02:20:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.688 02:20:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.688 ************************************ 00:06:28.688 START TEST skip_rpc 00:06:28.688 ************************************ 00:06:28.688 02:20:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:28.688 02:20:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69195 00:06:28.688 02:20:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:28.688 02:20:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.688 02:20:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:28.688 [2024-10-13 02:20:47.282139] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:28.688 [2024-10-13 02:20:47.282282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69195 ] 00:06:28.947 [2024-10-13 02:20:47.416331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.947 [2024-10-13 02:20:47.463746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69195 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69195 ']' 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69195 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69195 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.221 killing process with pid 69195 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69195' 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69195 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69195 00:06:34.221 00:06:34.221 real 0m5.457s 00:06:34.221 user 0m5.051s 00:06:34.221 sys 0m0.333s 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.221 02:20:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.221 ************************************ 00:06:34.221 END TEST skip_rpc 00:06:34.221 ************************************ 00:06:34.221 02:20:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:34.221 02:20:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.221 02:20:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.221 02:20:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.221 
************************************ 00:06:34.221 START TEST skip_rpc_with_json 00:06:34.221 ************************************ 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69283 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69283 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69283 ']' 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.221 02:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.221 [2024-10-13 02:20:52.818036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:34.221 [2024-10-13 02:20:52.818174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69283 ] 00:06:34.481 [2024-10-13 02:20:52.964341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.481 [2024-10-13 02:20:53.008414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.051 [2024-10-13 02:20:53.642709] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:35.051 request: 00:06:35.051 { 00:06:35.051 "trtype": "tcp", 00:06:35.051 "method": "nvmf_get_transports", 00:06:35.051 "req_id": 1 00:06:35.051 } 00:06:35.051 Got JSON-RPC error response 00:06:35.051 response: 00:06:35.051 { 00:06:35.051 "code": -19, 00:06:35.051 "message": "No such device" 00:06:35.051 } 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.051 [2024-10-13 02:20:53.654821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.051 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.311 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.311 02:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:35.311 { 00:06:35.311 "subsystems": [ 00:06:35.311 { 00:06:35.311 "subsystem": "fsdev", 00:06:35.311 "config": [ 00:06:35.311 { 00:06:35.311 "method": "fsdev_set_opts", 00:06:35.311 "params": { 00:06:35.311 "fsdev_io_pool_size": 65535, 00:06:35.311 "fsdev_io_cache_size": 256 00:06:35.311 } 00:06:35.311 } 00:06:35.311 ] 00:06:35.311 }, 00:06:35.311 { 00:06:35.311 "subsystem": "keyring", 00:06:35.311 "config": [] 00:06:35.311 }, 00:06:35.311 { 00:06:35.311 "subsystem": "iobuf", 00:06:35.311 "config": [ 00:06:35.311 { 00:06:35.311 "method": "iobuf_set_options", 00:06:35.311 "params": { 00:06:35.311 "small_pool_count": 8192, 00:06:35.311 "large_pool_count": 1024, 00:06:35.311 "small_bufsize": 8192, 00:06:35.311 "large_bufsize": 135168 00:06:35.311 } 00:06:35.311 } 00:06:35.311 ] 00:06:35.311 }, 00:06:35.311 { 00:06:35.311 "subsystem": "sock", 00:06:35.311 "config": [ 00:06:35.311 { 00:06:35.311 "method": "sock_set_default_impl", 00:06:35.311 "params": { 00:06:35.311 "impl_name": "posix" 00:06:35.311 } 00:06:35.311 }, 00:06:35.311 { 00:06:35.311 "method": "sock_impl_set_options", 00:06:35.311 "params": { 00:06:35.311 "impl_name": "ssl", 00:06:35.311 "recv_buf_size": 4096, 00:06:35.311 "send_buf_size": 4096, 00:06:35.312 "enable_recv_pipe": true, 00:06:35.312 "enable_quickack": false, 00:06:35.312 "enable_placement_id": 0, 00:06:35.312 
"enable_zerocopy_send_server": true, 00:06:35.312 "enable_zerocopy_send_client": false, 00:06:35.312 "zerocopy_threshold": 0, 00:06:35.312 "tls_version": 0, 00:06:35.312 "enable_ktls": false 00:06:35.312 } 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "method": "sock_impl_set_options", 00:06:35.312 "params": { 00:06:35.312 "impl_name": "posix", 00:06:35.312 "recv_buf_size": 2097152, 00:06:35.312 "send_buf_size": 2097152, 00:06:35.312 "enable_recv_pipe": true, 00:06:35.312 "enable_quickack": false, 00:06:35.312 "enable_placement_id": 0, 00:06:35.312 "enable_zerocopy_send_server": true, 00:06:35.312 "enable_zerocopy_send_client": false, 00:06:35.312 "zerocopy_threshold": 0, 00:06:35.312 "tls_version": 0, 00:06:35.312 "enable_ktls": false 00:06:35.312 } 00:06:35.312 } 00:06:35.312 ] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "vmd", 00:06:35.312 "config": [] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "accel", 00:06:35.312 "config": [ 00:06:35.312 { 00:06:35.312 "method": "accel_set_options", 00:06:35.312 "params": { 00:06:35.312 "small_cache_size": 128, 00:06:35.312 "large_cache_size": 16, 00:06:35.312 "task_count": 2048, 00:06:35.312 "sequence_count": 2048, 00:06:35.312 "buf_count": 2048 00:06:35.312 } 00:06:35.312 } 00:06:35.312 ] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "bdev", 00:06:35.312 "config": [ 00:06:35.312 { 00:06:35.312 "method": "bdev_set_options", 00:06:35.312 "params": { 00:06:35.312 "bdev_io_pool_size": 65535, 00:06:35.312 "bdev_io_cache_size": 256, 00:06:35.312 "bdev_auto_examine": true, 00:06:35.312 "iobuf_small_cache_size": 128, 00:06:35.312 "iobuf_large_cache_size": 16 00:06:35.312 } 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "method": "bdev_raid_set_options", 00:06:35.312 "params": { 00:06:35.312 "process_window_size_kb": 1024, 00:06:35.312 "process_max_bandwidth_mb_sec": 0 00:06:35.312 } 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "method": "bdev_iscsi_set_options", 00:06:35.312 "params": { 00:06:35.312 
"timeout_sec": 30 00:06:35.312 } 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "method": "bdev_nvme_set_options", 00:06:35.312 "params": { 00:06:35.312 "action_on_timeout": "none", 00:06:35.312 "timeout_us": 0, 00:06:35.312 "timeout_admin_us": 0, 00:06:35.312 "keep_alive_timeout_ms": 10000, 00:06:35.312 "arbitration_burst": 0, 00:06:35.312 "low_priority_weight": 0, 00:06:35.312 "medium_priority_weight": 0, 00:06:35.312 "high_priority_weight": 0, 00:06:35.312 "nvme_adminq_poll_period_us": 10000, 00:06:35.312 "nvme_ioq_poll_period_us": 0, 00:06:35.312 "io_queue_requests": 0, 00:06:35.312 "delay_cmd_submit": true, 00:06:35.312 "transport_retry_count": 4, 00:06:35.312 "bdev_retry_count": 3, 00:06:35.312 "transport_ack_timeout": 0, 00:06:35.312 "ctrlr_loss_timeout_sec": 0, 00:06:35.312 "reconnect_delay_sec": 0, 00:06:35.312 "fast_io_fail_timeout_sec": 0, 00:06:35.312 "disable_auto_failback": false, 00:06:35.312 "generate_uuids": false, 00:06:35.312 "transport_tos": 0, 00:06:35.312 "nvme_error_stat": false, 00:06:35.312 "rdma_srq_size": 0, 00:06:35.312 "io_path_stat": false, 00:06:35.312 "allow_accel_sequence": false, 00:06:35.312 "rdma_max_cq_size": 0, 00:06:35.312 "rdma_cm_event_timeout_ms": 0, 00:06:35.312 "dhchap_digests": [ 00:06:35.312 "sha256", 00:06:35.312 "sha384", 00:06:35.312 "sha512" 00:06:35.312 ], 00:06:35.312 "dhchap_dhgroups": [ 00:06:35.312 "null", 00:06:35.312 "ffdhe2048", 00:06:35.312 "ffdhe3072", 00:06:35.312 "ffdhe4096", 00:06:35.312 "ffdhe6144", 00:06:35.312 "ffdhe8192" 00:06:35.312 ] 00:06:35.312 } 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "method": "bdev_nvme_set_hotplug", 00:06:35.312 "params": { 00:06:35.312 "period_us": 100000, 00:06:35.312 "enable": false 00:06:35.312 } 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "method": "bdev_wait_for_examine" 00:06:35.312 } 00:06:35.312 ] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "scsi", 00:06:35.312 "config": null 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "scheduler", 
00:06:35.312 "config": [ 00:06:35.312 { 00:06:35.312 "method": "framework_set_scheduler", 00:06:35.312 "params": { 00:06:35.312 "name": "static" 00:06:35.312 } 00:06:35.312 } 00:06:35.312 ] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "vhost_scsi", 00:06:35.312 "config": [] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "vhost_blk", 00:06:35.312 "config": [] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "ublk", 00:06:35.312 "config": [] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "nbd", 00:06:35.312 "config": [] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "nvmf", 00:06:35.312 "config": [ 00:06:35.312 { 00:06:35.312 "method": "nvmf_set_config", 00:06:35.312 "params": { 00:06:35.312 "discovery_filter": "match_any", 00:06:35.312 "admin_cmd_passthru": { 00:06:35.312 "identify_ctrlr": false 00:06:35.312 }, 00:06:35.312 "dhchap_digests": [ 00:06:35.312 "sha256", 00:06:35.312 "sha384", 00:06:35.312 "sha512" 00:06:35.312 ], 00:06:35.312 "dhchap_dhgroups": [ 00:06:35.312 "null", 00:06:35.312 "ffdhe2048", 00:06:35.312 "ffdhe3072", 00:06:35.312 "ffdhe4096", 00:06:35.312 "ffdhe6144", 00:06:35.312 "ffdhe8192" 00:06:35.312 ] 00:06:35.312 } 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "method": "nvmf_set_max_subsystems", 00:06:35.312 "params": { 00:06:35.312 "max_subsystems": 1024 00:06:35.312 } 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "method": "nvmf_set_crdt", 00:06:35.312 "params": { 00:06:35.312 "crdt1": 0, 00:06:35.312 "crdt2": 0, 00:06:35.312 "crdt3": 0 00:06:35.312 } 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "method": "nvmf_create_transport", 00:06:35.312 "params": { 00:06:35.312 "trtype": "TCP", 00:06:35.312 "max_queue_depth": 128, 00:06:35.312 "max_io_qpairs_per_ctrlr": 127, 00:06:35.312 "in_capsule_data_size": 4096, 00:06:35.312 "max_io_size": 131072, 00:06:35.312 "io_unit_size": 131072, 00:06:35.312 "max_aq_depth": 128, 00:06:35.312 "num_shared_buffers": 511, 00:06:35.312 "buf_cache_size": 4294967295, 
00:06:35.312 "dif_insert_or_strip": false, 00:06:35.312 "zcopy": false, 00:06:35.312 "c2h_success": true, 00:06:35.312 "sock_priority": 0, 00:06:35.312 "abort_timeout_sec": 1, 00:06:35.312 "ack_timeout": 0, 00:06:35.312 "data_wr_pool_size": 0 00:06:35.312 } 00:06:35.312 } 00:06:35.312 ] 00:06:35.312 }, 00:06:35.312 { 00:06:35.312 "subsystem": "iscsi", 00:06:35.312 "config": [ 00:06:35.312 { 00:06:35.312 "method": "iscsi_set_options", 00:06:35.312 "params": { 00:06:35.312 "node_base": "iqn.2016-06.io.spdk", 00:06:35.312 "max_sessions": 128, 00:06:35.312 "max_connections_per_session": 2, 00:06:35.312 "max_queue_depth": 64, 00:06:35.312 "default_time2wait": 2, 00:06:35.312 "default_time2retain": 20, 00:06:35.312 "first_burst_length": 8192, 00:06:35.312 "immediate_data": true, 00:06:35.312 "allow_duplicated_isid": false, 00:06:35.312 "error_recovery_level": 0, 00:06:35.312 "nop_timeout": 60, 00:06:35.312 "nop_in_interval": 30, 00:06:35.312 "disable_chap": false, 00:06:35.312 "require_chap": false, 00:06:35.312 "mutual_chap": false, 00:06:35.312 "chap_group": 0, 00:06:35.313 "max_large_datain_per_connection": 64, 00:06:35.313 "max_r2t_per_connection": 4, 00:06:35.313 "pdu_pool_size": 36864, 00:06:35.313 "immediate_data_pool_size": 16384, 00:06:35.313 "data_out_pool_size": 2048 00:06:35.313 } 00:06:35.313 } 00:06:35.313 ] 00:06:35.313 } 00:06:35.313 ] 00:06:35.313 } 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69283 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69283 ']' 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69283 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69283 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.313 killing process with pid 69283 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69283' 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69283 00:06:35.313 02:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69283 00:06:35.882 02:20:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69306 00:06:35.882 02:20:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:35.882 02:20:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69306 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69306 ']' 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69306 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69306 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.188 killing process with pid 69306 
00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69306' 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69306 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69306 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:41.188 00:06:41.188 real 0m6.998s 00:06:41.188 user 0m6.536s 00:06:41.188 sys 0m0.763s 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 ************************************ 00:06:41.188 END TEST skip_rpc_with_json 00:06:41.188 ************************************ 00:06:41.188 02:20:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:41.188 02:20:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.188 02:20:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.188 02:20:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.188 ************************************ 00:06:41.188 START TEST skip_rpc_with_delay 00:06:41.188 ************************************ 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:41.188 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:41.448 [2024-10-13 02:20:59.878136] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:41.448 [2024-10-13 02:20:59.878255] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:41.448 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:41.448 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.448 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.448 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.448 00:06:41.448 real 0m0.163s 00:06:41.448 user 0m0.086s 00:06:41.448 sys 0m0.076s 00:06:41.448 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.448 02:20:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:41.448 ************************************ 00:06:41.448 END TEST skip_rpc_with_delay 00:06:41.448 ************************************ 00:06:41.448 02:20:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:41.448 02:21:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:41.448 02:21:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:41.448 02:21:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.448 02:21:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.448 02:21:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.448 ************************************ 00:06:41.448 START TEST exit_on_failed_rpc_init 00:06:41.448 ************************************ 00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69423 00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69423 00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69423 ']' 00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.448 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:41.448 [2024-10-13 02:21:00.104466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:41.448 [2024-10-13 02:21:00.104623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69423 ] 00:06:41.708 [2024-10-13 02:21:00.249738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.708 [2024-10-13 02:21:00.297452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.279 02:21:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:42.279 02:21:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:42.538 [2024-10-13 02:21:01.028604] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:42.538 [2024-10-13 02:21:01.028743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69441 ] 00:06:42.538 [2024-10-13 02:21:01.175149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.798 [2024-10-13 02:21:01.222228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.798 [2024-10-13 02:21:01.222350] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:42.798 [2024-10-13 02:21:01.222367] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:42.798 [2024-10-13 02:21:01.222378] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69423 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69423 ']' 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69423 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69423 00:06:42.798 killing process with pid 69423 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.798 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 69423' 00:06:42.799 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69423 00:06:42.799 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69423 00:06:43.369 00:06:43.369 real 0m1.764s 00:06:43.369 user 0m1.902s 00:06:43.369 sys 0m0.505s 00:06:43.369 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.369 02:21:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:43.369 ************************************ 00:06:43.369 END TEST exit_on_failed_rpc_init 00:06:43.369 ************************************ 00:06:43.369 02:21:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:43.369 ************************************ 00:06:43.369 END TEST skip_rpc 00:06:43.369 ************************************ 00:06:43.369 00:06:43.369 real 0m14.898s 00:06:43.369 user 0m13.787s 00:06:43.369 sys 0m1.998s 00:06:43.369 02:21:01 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.369 02:21:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.369 02:21:01 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:43.369 02:21:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.369 02:21:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.369 02:21:01 -- common/autotest_common.sh@10 -- # set +x 00:06:43.369 ************************************ 00:06:43.369 START TEST rpc_client 00:06:43.369 ************************************ 00:06:43.369 02:21:01 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:43.369 * Looking for test storage... 
00:06:43.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:43.369 02:21:02 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:43.370 02:21:02 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:43.370 02:21:02 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:43.630 02:21:02 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.630 02:21:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:43.630 02:21:02 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.630 02:21:02 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.630 --rc genhtml_branch_coverage=1 00:06:43.630 --rc genhtml_function_coverage=1 00:06:43.630 --rc genhtml_legend=1 00:06:43.630 --rc geninfo_all_blocks=1 00:06:43.630 --rc geninfo_unexecuted_blocks=1 00:06:43.630 00:06:43.630 ' 00:06:43.630 02:21:02 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.630 --rc genhtml_branch_coverage=1 00:06:43.630 --rc genhtml_function_coverage=1 00:06:43.630 --rc genhtml_legend=1 00:06:43.630 --rc geninfo_all_blocks=1 00:06:43.630 --rc geninfo_unexecuted_blocks=1 00:06:43.630 00:06:43.630 ' 00:06:43.630 02:21:02 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.630 --rc genhtml_branch_coverage=1 00:06:43.630 --rc genhtml_function_coverage=1 00:06:43.630 --rc genhtml_legend=1 00:06:43.630 --rc geninfo_all_blocks=1 00:06:43.630 --rc geninfo_unexecuted_blocks=1 00:06:43.630 00:06:43.630 ' 00:06:43.630 02:21:02 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:43.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.630 --rc genhtml_branch_coverage=1 00:06:43.630 --rc genhtml_function_coverage=1 00:06:43.630 --rc genhtml_legend=1 00:06:43.630 --rc geninfo_all_blocks=1 00:06:43.630 --rc geninfo_unexecuted_blocks=1 00:06:43.630 00:06:43.630 ' 00:06:43.630 02:21:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:43.630 OK 00:06:43.630 02:21:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:43.630 00:06:43.630 real 0m0.293s 00:06:43.630 user 0m0.174s 00:06:43.630 sys 0m0.134s 00:06:43.630 02:21:02 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.630 ************************************ 00:06:43.630 END TEST rpc_client 00:06:43.630 ************************************ 00:06:43.630 02:21:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:43.630 02:21:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:43.630 02:21:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.630 02:21:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.630 02:21:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.630 ************************************ 00:06:43.630 START TEST json_config 00:06:43.630 ************************************ 00:06:43.630 02:21:02 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:43.891 02:21:02 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:43.891 02:21:02 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:43.891 02:21:02 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:43.891 02:21:02 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:43.891 02:21:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.891 02:21:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.891 02:21:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.891 02:21:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.891 02:21:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.891 02:21:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.891 02:21:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.891 02:21:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.891 02:21:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.891 02:21:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.891 02:21:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.891 02:21:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:43.891 02:21:02 json_config -- scripts/common.sh@345 -- # : 1 00:06:43.891 02:21:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.891 02:21:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.891 02:21:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:43.891 02:21:02 json_config -- scripts/common.sh@353 -- # local d=1 00:06:43.891 02:21:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.891 02:21:02 json_config -- scripts/common.sh@355 -- # echo 1 00:06:43.891 02:21:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.891 02:21:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:43.891 02:21:02 json_config -- scripts/common.sh@353 -- # local d=2 00:06:43.891 02:21:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.891 02:21:02 json_config -- scripts/common.sh@355 -- # echo 2 00:06:43.891 02:21:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.891 02:21:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.891 02:21:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.891 02:21:02 json_config -- scripts/common.sh@368 -- # return 0 00:06:43.891 02:21:02 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.891 02:21:02 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:43.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.891 --rc genhtml_branch_coverage=1 00:06:43.891 --rc genhtml_function_coverage=1 00:06:43.891 --rc genhtml_legend=1 00:06:43.891 --rc geninfo_all_blocks=1 00:06:43.891 --rc geninfo_unexecuted_blocks=1 00:06:43.891 00:06:43.891 ' 00:06:43.891 02:21:02 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:43.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.891 --rc genhtml_branch_coverage=1 00:06:43.891 --rc genhtml_function_coverage=1 00:06:43.891 --rc genhtml_legend=1 00:06:43.891 --rc geninfo_all_blocks=1 00:06:43.891 --rc geninfo_unexecuted_blocks=1 00:06:43.891 00:06:43.891 ' 00:06:43.891 02:21:02 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:43.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.891 --rc genhtml_branch_coverage=1 00:06:43.891 --rc genhtml_function_coverage=1 00:06:43.891 --rc genhtml_legend=1 00:06:43.891 --rc geninfo_all_blocks=1 00:06:43.891 --rc geninfo_unexecuted_blocks=1 00:06:43.891 00:06:43.891 ' 00:06:43.891 02:21:02 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:43.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.891 --rc genhtml_branch_coverage=1 00:06:43.891 --rc genhtml_function_coverage=1 00:06:43.891 --rc genhtml_legend=1 00:06:43.891 --rc geninfo_all_blocks=1 00:06:43.891 --rc geninfo_unexecuted_blocks=1 00:06:43.891 00:06:43.891 ' 00:06:43.891 02:21:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b3f98cd3-51b2-436d-a29d-feb56f34e045 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=b3f98cd3-51b2-436d-a29d-feb56f34e045 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.892 02:21:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.892 02:21:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.892 02:21:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.892 02:21:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.892 02:21:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.892 02:21:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.892 02:21:02 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.892 02:21:02 json_config -- paths/export.sh@5 -- # export PATH 00:06:43.892 02:21:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@51 -- # : 0 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.892 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.892 02:21:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.892 02:21:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:43.892 02:21:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:43.892 02:21:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:43.892 02:21:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:43.892 02:21:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:43.892 02:21:02 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:43.892 WARNING: No tests are enabled so not running JSON configuration tests 00:06:43.892 02:21:02 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:43.892 00:06:43.892 real 0m0.234s 00:06:43.892 user 0m0.147s 00:06:43.892 sys 0m0.091s 00:06:43.892 02:21:02 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.892 02:21:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.892 ************************************ 00:06:43.892 END TEST json_config 00:06:43.892 ************************************ 00:06:43.892 02:21:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:43.892 02:21:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.892 02:21:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.892 02:21:02 -- common/autotest_common.sh@10 -- # set +x 00:06:43.892 ************************************ 00:06:43.892 START TEST json_config_extra_key 00:06:43.892 ************************************ 00:06:43.892 02:21:02 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:44.153 02:21:02 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:44.153 02:21:02 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:06:44.153 02:21:02 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:44.153 02:21:02 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.153 02:21:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:44.153 02:21:02 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.153 02:21:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:44.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.153 --rc genhtml_branch_coverage=1 00:06:44.153 --rc genhtml_function_coverage=1 00:06:44.153 --rc genhtml_legend=1 00:06:44.153 --rc geninfo_all_blocks=1 00:06:44.153 --rc geninfo_unexecuted_blocks=1 00:06:44.153 00:06:44.153 ' 00:06:44.153 02:21:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:44.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.153 --rc genhtml_branch_coverage=1 00:06:44.153 --rc genhtml_function_coverage=1 00:06:44.153 --rc 
genhtml_legend=1 00:06:44.153 --rc geninfo_all_blocks=1 00:06:44.153 --rc geninfo_unexecuted_blocks=1 00:06:44.153 00:06:44.153 ' 00:06:44.153 02:21:02 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:44.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.153 --rc genhtml_branch_coverage=1 00:06:44.153 --rc genhtml_function_coverage=1 00:06:44.153 --rc genhtml_legend=1 00:06:44.153 --rc geninfo_all_blocks=1 00:06:44.153 --rc geninfo_unexecuted_blocks=1 00:06:44.153 00:06:44.153 ' 00:06:44.153 02:21:02 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:44.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.153 --rc genhtml_branch_coverage=1 00:06:44.153 --rc genhtml_function_coverage=1 00:06:44.153 --rc genhtml_legend=1 00:06:44.153 --rc geninfo_all_blocks=1 00:06:44.153 --rc geninfo_unexecuted_blocks=1 00:06:44.153 00:06:44.153 ' 00:06:44.153 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.153 02:21:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:44.153 02:21:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.153 02:21:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.153 02:21:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b3f98cd3-51b2-436d-a29d-feb56f34e045 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b3f98cd3-51b2-436d-a29d-feb56f34e045 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.154 02:21:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.154 02:21:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.154 02:21:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.154 02:21:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.154 02:21:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.154 02:21:02 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.154 02:21:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.154 02:21:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:44.154 02:21:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.154 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.154 02:21:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:44.154 INFO: launching applications... 
00:06:44.154 02:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69618 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:44.154 Waiting for target to run... 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69618 /var/tmp/spdk_tgt.sock 00:06:44.154 02:21:02 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69618 ']' 00:06:44.154 02:21:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:44.154 02:21:02 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:44.154 02:21:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.154 02:21:02 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:44.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:44.154 02:21:02 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.154 02:21:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:44.414 [2024-10-13 02:21:02.886763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:44.414 [2024-10-13 02:21:02.887012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69618 ] 00:06:44.984 [2024-10-13 02:21:03.412056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.984 [2024-10-13 02:21:03.444649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.244 02:21:03 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.244 00:06:45.244 INFO: shutting down applications... 00:06:45.244 02:21:03 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:45.244 02:21:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:45.244 02:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:45.244 02:21:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:45.244 02:21:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:45.244 02:21:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:45.244 02:21:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69618 ]] 00:06:45.245 02:21:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69618 00:06:45.245 02:21:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:45.245 02:21:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.245 02:21:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69618 00:06:45.245 02:21:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:45.814 02:21:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:45.814 02:21:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.814 02:21:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69618 00:06:45.814 02:21:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:45.814 02:21:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:45.815 02:21:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:45.815 02:21:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:45.815 SPDK target shutdown done 00:06:45.815 02:21:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:45.815 Success 00:06:45.815 00:06:45.815 real 0m1.659s 00:06:45.815 user 0m1.201s 00:06:45.815 sys 0m0.656s 00:06:45.815 02:21:04 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.815 02:21:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:45.815 ************************************ 
00:06:45.815 END TEST json_config_extra_key 00:06:45.815 ************************************ 00:06:45.815 02:21:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:45.815 02:21:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.815 02:21:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.815 02:21:04 -- common/autotest_common.sh@10 -- # set +x 00:06:45.815 ************************************ 00:06:45.815 START TEST alias_rpc 00:06:45.815 ************************************ 00:06:45.815 02:21:04 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:45.815 * Looking for test storage... 00:06:45.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:45.815 02:21:04 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:45.815 02:21:04 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:45.815 02:21:04 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.075 02:21:04 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.075 02:21:04 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.075 02:21:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:46.075 02:21:04 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.075 02:21:04 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.075 --rc genhtml_branch_coverage=1 00:06:46.075 --rc genhtml_function_coverage=1 00:06:46.075 --rc genhtml_legend=1 00:06:46.075 --rc geninfo_all_blocks=1 00:06:46.075 --rc geninfo_unexecuted_blocks=1 00:06:46.075 00:06:46.075 ' 00:06:46.075 02:21:04 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.075 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.075 --rc genhtml_branch_coverage=1 00:06:46.075 --rc genhtml_function_coverage=1 00:06:46.075 --rc genhtml_legend=1 00:06:46.075 --rc geninfo_all_blocks=1 00:06:46.075 --rc geninfo_unexecuted_blocks=1 00:06:46.075 00:06:46.075 ' 00:06:46.075 02:21:04 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.075 --rc genhtml_branch_coverage=1 00:06:46.075 --rc genhtml_function_coverage=1 00:06:46.075 --rc genhtml_legend=1 00:06:46.075 --rc geninfo_all_blocks=1 00:06:46.075 --rc geninfo_unexecuted_blocks=1 00:06:46.075 00:06:46.075 ' 00:06:46.075 02:21:04 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.075 --rc genhtml_branch_coverage=1 00:06:46.075 --rc genhtml_function_coverage=1 00:06:46.075 --rc genhtml_legend=1 00:06:46.075 --rc geninfo_all_blocks=1 00:06:46.075 --rc geninfo_unexecuted_blocks=1 00:06:46.075 00:06:46.075 ' 00:06:46.075 02:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:46.075 02:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69697 00:06:46.075 02:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:46.075 02:21:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69697 00:06:46.075 02:21:04 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69697 ']' 00:06:46.076 02:21:04 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.076 02:21:04 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.076 02:21:04 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:46.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.076 02:21:04 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.076 02:21:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.076 [2024-10-13 02:21:04.618816] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:46.076 [2024-10-13 02:21:04.619008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69697 ] 00:06:46.336 [2024-10-13 02:21:04.765520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.336 [2024-10-13 02:21:04.814903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.906 02:21:05 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.906 02:21:05 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:46.906 02:21:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:47.165 02:21:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69697 00:06:47.165 02:21:05 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69697 ']' 00:06:47.165 02:21:05 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69697 00:06:47.165 02:21:05 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:47.165 02:21:05 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.165 02:21:05 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69697 00:06:47.165 killing process with pid 69697 00:06:47.165 02:21:05 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.165 02:21:05 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.165 02:21:05 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 69697' 00:06:47.165 02:21:05 alias_rpc -- common/autotest_common.sh@969 -- # kill 69697 00:06:47.165 02:21:05 alias_rpc -- common/autotest_common.sh@974 -- # wait 69697 00:06:47.426 00:06:47.426 real 0m1.811s 00:06:47.426 user 0m1.827s 00:06:47.426 sys 0m0.525s 00:06:47.426 02:21:06 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.426 02:21:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.426 ************************************ 00:06:47.426 END TEST alias_rpc 00:06:47.426 ************************************ 00:06:47.686 02:21:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:47.686 02:21:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:47.686 02:21:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.686 02:21:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.686 02:21:06 -- common/autotest_common.sh@10 -- # set +x 00:06:47.686 ************************************ 00:06:47.686 START TEST spdkcli_tcp 00:06:47.686 ************************************ 00:06:47.686 02:21:06 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:47.686 * Looking for test storage... 
00:06:47.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:47.686 02:21:06 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.686 02:21:06 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.686 02:21:06 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.686 02:21:06 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.686 02:21:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:47.686 02:21:06 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.686 02:21:06 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.686 --rc genhtml_branch_coverage=1 00:06:47.686 --rc genhtml_function_coverage=1 00:06:47.686 --rc genhtml_legend=1 00:06:47.686 --rc geninfo_all_blocks=1 00:06:47.686 --rc geninfo_unexecuted_blocks=1 00:06:47.686 00:06:47.686 ' 00:06:47.686 02:21:06 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.686 --rc genhtml_branch_coverage=1 00:06:47.686 --rc genhtml_function_coverage=1 00:06:47.686 --rc genhtml_legend=1 00:06:47.686 --rc geninfo_all_blocks=1 00:06:47.686 --rc geninfo_unexecuted_blocks=1 00:06:47.686 00:06:47.686 ' 00:06:47.686 02:21:06 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.686 --rc genhtml_branch_coverage=1 00:06:47.686 --rc genhtml_function_coverage=1 00:06:47.686 --rc genhtml_legend=1 00:06:47.686 --rc geninfo_all_blocks=1 00:06:47.686 --rc geninfo_unexecuted_blocks=1 00:06:47.686 00:06:47.686 ' 00:06:47.686 02:21:06 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.686 --rc genhtml_branch_coverage=1 00:06:47.686 --rc genhtml_function_coverage=1 00:06:47.686 --rc genhtml_legend=1 00:06:47.686 --rc geninfo_all_blocks=1 00:06:47.686 --rc geninfo_unexecuted_blocks=1 00:06:47.686 00:06:47.686 ' 00:06:47.686 02:21:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:47.946 02:21:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:47.946 02:21:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:47.946 02:21:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:47.946 02:21:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:47.946 02:21:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:47.946 02:21:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:47.946 02:21:06 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.946 02:21:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.946 02:21:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69782 00:06:47.946 02:21:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:47.946 02:21:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69782 00:06:47.946 02:21:06 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69782 ']' 00:06:47.946 02:21:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.946 02:21:06 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.946 02:21:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.946 02:21:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.946 02:21:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.946 [2024-10-13 02:21:06.468390] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:47.946 [2024-10-13 02:21:06.468613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69782 ] 00:06:47.946 [2024-10-13 02:21:06.616129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.206 [2024-10-13 02:21:06.667106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.206 [2024-10-13 02:21:06.667222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.776 02:21:07 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.776 02:21:07 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:48.776 02:21:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69799 00:06:48.776 02:21:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:48.776 02:21:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:48.776 [ 00:06:48.776 "bdev_malloc_delete", 
00:06:48.776 "bdev_malloc_create", 00:06:48.776 "bdev_null_resize", 00:06:48.776 "bdev_null_delete", 00:06:48.776 "bdev_null_create", 00:06:48.776 "bdev_nvme_cuse_unregister", 00:06:48.776 "bdev_nvme_cuse_register", 00:06:48.776 "bdev_opal_new_user", 00:06:48.776 "bdev_opal_set_lock_state", 00:06:48.776 "bdev_opal_delete", 00:06:48.776 "bdev_opal_get_info", 00:06:48.776 "bdev_opal_create", 00:06:48.776 "bdev_nvme_opal_revert", 00:06:48.776 "bdev_nvme_opal_init", 00:06:48.776 "bdev_nvme_send_cmd", 00:06:48.776 "bdev_nvme_set_keys", 00:06:48.776 "bdev_nvme_get_path_iostat", 00:06:48.776 "bdev_nvme_get_mdns_discovery_info", 00:06:48.776 "bdev_nvme_stop_mdns_discovery", 00:06:48.776 "bdev_nvme_start_mdns_discovery", 00:06:48.776 "bdev_nvme_set_multipath_policy", 00:06:48.776 "bdev_nvme_set_preferred_path", 00:06:48.776 "bdev_nvme_get_io_paths", 00:06:48.776 "bdev_nvme_remove_error_injection", 00:06:48.776 "bdev_nvme_add_error_injection", 00:06:48.776 "bdev_nvme_get_discovery_info", 00:06:48.776 "bdev_nvme_stop_discovery", 00:06:48.776 "bdev_nvme_start_discovery", 00:06:48.776 "bdev_nvme_get_controller_health_info", 00:06:48.776 "bdev_nvme_disable_controller", 00:06:48.776 "bdev_nvme_enable_controller", 00:06:48.776 "bdev_nvme_reset_controller", 00:06:48.776 "bdev_nvme_get_transport_statistics", 00:06:48.776 "bdev_nvme_apply_firmware", 00:06:48.776 "bdev_nvme_detach_controller", 00:06:48.776 "bdev_nvme_get_controllers", 00:06:48.776 "bdev_nvme_attach_controller", 00:06:48.776 "bdev_nvme_set_hotplug", 00:06:48.776 "bdev_nvme_set_options", 00:06:48.776 "bdev_passthru_delete", 00:06:48.776 "bdev_passthru_create", 00:06:48.776 "bdev_lvol_set_parent_bdev", 00:06:48.776 "bdev_lvol_set_parent", 00:06:48.776 "bdev_lvol_check_shallow_copy", 00:06:48.776 "bdev_lvol_start_shallow_copy", 00:06:48.776 "bdev_lvol_grow_lvstore", 00:06:48.776 "bdev_lvol_get_lvols", 00:06:48.776 "bdev_lvol_get_lvstores", 00:06:48.776 "bdev_lvol_delete", 00:06:48.776 "bdev_lvol_set_read_only", 
00:06:48.776 "bdev_lvol_resize", 00:06:48.776 "bdev_lvol_decouple_parent", 00:06:48.776 "bdev_lvol_inflate", 00:06:48.776 "bdev_lvol_rename", 00:06:48.776 "bdev_lvol_clone_bdev", 00:06:48.776 "bdev_lvol_clone", 00:06:48.776 "bdev_lvol_snapshot", 00:06:48.776 "bdev_lvol_create", 00:06:48.776 "bdev_lvol_delete_lvstore", 00:06:48.776 "bdev_lvol_rename_lvstore", 00:06:48.776 "bdev_lvol_create_lvstore", 00:06:48.776 "bdev_raid_set_options", 00:06:48.776 "bdev_raid_remove_base_bdev", 00:06:48.776 "bdev_raid_add_base_bdev", 00:06:48.776 "bdev_raid_delete", 00:06:48.776 "bdev_raid_create", 00:06:48.776 "bdev_raid_get_bdevs", 00:06:48.776 "bdev_error_inject_error", 00:06:48.776 "bdev_error_delete", 00:06:48.776 "bdev_error_create", 00:06:48.776 "bdev_split_delete", 00:06:48.776 "bdev_split_create", 00:06:48.776 "bdev_delay_delete", 00:06:48.776 "bdev_delay_create", 00:06:48.776 "bdev_delay_update_latency", 00:06:48.776 "bdev_zone_block_delete", 00:06:48.776 "bdev_zone_block_create", 00:06:48.776 "blobfs_create", 00:06:48.776 "blobfs_detect", 00:06:48.776 "blobfs_set_cache_size", 00:06:48.776 "bdev_aio_delete", 00:06:48.776 "bdev_aio_rescan", 00:06:48.776 "bdev_aio_create", 00:06:48.776 "bdev_ftl_set_property", 00:06:48.776 "bdev_ftl_get_properties", 00:06:48.776 "bdev_ftl_get_stats", 00:06:48.776 "bdev_ftl_unmap", 00:06:48.776 "bdev_ftl_unload", 00:06:48.776 "bdev_ftl_delete", 00:06:48.776 "bdev_ftl_load", 00:06:48.776 "bdev_ftl_create", 00:06:48.776 "bdev_virtio_attach_controller", 00:06:48.776 "bdev_virtio_scsi_get_devices", 00:06:48.776 "bdev_virtio_detach_controller", 00:06:48.776 "bdev_virtio_blk_set_hotplug", 00:06:48.776 "bdev_iscsi_delete", 00:06:48.776 "bdev_iscsi_create", 00:06:48.776 "bdev_iscsi_set_options", 00:06:48.776 "accel_error_inject_error", 00:06:48.776 "ioat_scan_accel_module", 00:06:48.776 "dsa_scan_accel_module", 00:06:48.776 "iaa_scan_accel_module", 00:06:48.776 "keyring_file_remove_key", 00:06:48.776 "keyring_file_add_key", 00:06:48.776 
"keyring_linux_set_options", 00:06:48.776 "fsdev_aio_delete", 00:06:48.776 "fsdev_aio_create", 00:06:48.776 "iscsi_get_histogram", 00:06:48.776 "iscsi_enable_histogram", 00:06:48.776 "iscsi_set_options", 00:06:48.776 "iscsi_get_auth_groups", 00:06:48.776 "iscsi_auth_group_remove_secret", 00:06:48.776 "iscsi_auth_group_add_secret", 00:06:48.776 "iscsi_delete_auth_group", 00:06:48.776 "iscsi_create_auth_group", 00:06:48.776 "iscsi_set_discovery_auth", 00:06:48.776 "iscsi_get_options", 00:06:48.776 "iscsi_target_node_request_logout", 00:06:48.776 "iscsi_target_node_set_redirect", 00:06:48.776 "iscsi_target_node_set_auth", 00:06:48.776 "iscsi_target_node_add_lun", 00:06:48.776 "iscsi_get_stats", 00:06:48.776 "iscsi_get_connections", 00:06:48.776 "iscsi_portal_group_set_auth", 00:06:48.776 "iscsi_start_portal_group", 00:06:48.776 "iscsi_delete_portal_group", 00:06:48.776 "iscsi_create_portal_group", 00:06:48.776 "iscsi_get_portal_groups", 00:06:48.776 "iscsi_delete_target_node", 00:06:48.776 "iscsi_target_node_remove_pg_ig_maps", 00:06:48.776 "iscsi_target_node_add_pg_ig_maps", 00:06:48.776 "iscsi_create_target_node", 00:06:48.776 "iscsi_get_target_nodes", 00:06:48.776 "iscsi_delete_initiator_group", 00:06:48.776 "iscsi_initiator_group_remove_initiators", 00:06:48.776 "iscsi_initiator_group_add_initiators", 00:06:48.776 "iscsi_create_initiator_group", 00:06:48.776 "iscsi_get_initiator_groups", 00:06:48.776 "nvmf_set_crdt", 00:06:48.776 "nvmf_set_config", 00:06:48.776 "nvmf_set_max_subsystems", 00:06:48.776 "nvmf_stop_mdns_prr", 00:06:48.776 "nvmf_publish_mdns_prr", 00:06:48.776 "nvmf_subsystem_get_listeners", 00:06:48.776 "nvmf_subsystem_get_qpairs", 00:06:48.776 "nvmf_subsystem_get_controllers", 00:06:48.776 "nvmf_get_stats", 00:06:48.776 "nvmf_get_transports", 00:06:48.776 "nvmf_create_transport", 00:06:48.776 "nvmf_get_targets", 00:06:48.776 "nvmf_delete_target", 00:06:48.776 "nvmf_create_target", 00:06:48.776 "nvmf_subsystem_allow_any_host", 00:06:48.776 
"nvmf_subsystem_set_keys", 00:06:48.776 "nvmf_subsystem_remove_host", 00:06:48.776 "nvmf_subsystem_add_host", 00:06:48.776 "nvmf_ns_remove_host", 00:06:48.776 "nvmf_ns_add_host", 00:06:48.776 "nvmf_subsystem_remove_ns", 00:06:48.776 "nvmf_subsystem_set_ns_ana_group", 00:06:48.776 "nvmf_subsystem_add_ns", 00:06:48.776 "nvmf_subsystem_listener_set_ana_state", 00:06:48.776 "nvmf_discovery_get_referrals", 00:06:48.776 "nvmf_discovery_remove_referral", 00:06:48.776 "nvmf_discovery_add_referral", 00:06:48.776 "nvmf_subsystem_remove_listener", 00:06:48.776 "nvmf_subsystem_add_listener", 00:06:48.776 "nvmf_delete_subsystem", 00:06:48.776 "nvmf_create_subsystem", 00:06:48.776 "nvmf_get_subsystems", 00:06:48.776 "env_dpdk_get_mem_stats", 00:06:48.776 "nbd_get_disks", 00:06:48.776 "nbd_stop_disk", 00:06:48.776 "nbd_start_disk", 00:06:48.776 "ublk_recover_disk", 00:06:48.776 "ublk_get_disks", 00:06:48.776 "ublk_stop_disk", 00:06:48.776 "ublk_start_disk", 00:06:48.776 "ublk_destroy_target", 00:06:48.776 "ublk_create_target", 00:06:48.776 "virtio_blk_create_transport", 00:06:48.776 "virtio_blk_get_transports", 00:06:48.776 "vhost_controller_set_coalescing", 00:06:48.776 "vhost_get_controllers", 00:06:48.776 "vhost_delete_controller", 00:06:48.776 "vhost_create_blk_controller", 00:06:48.776 "vhost_scsi_controller_remove_target", 00:06:48.776 "vhost_scsi_controller_add_target", 00:06:48.776 "vhost_start_scsi_controller", 00:06:48.776 "vhost_create_scsi_controller", 00:06:48.776 "thread_set_cpumask", 00:06:48.776 "scheduler_set_options", 00:06:48.777 "framework_get_governor", 00:06:48.777 "framework_get_scheduler", 00:06:48.777 "framework_set_scheduler", 00:06:48.777 "framework_get_reactors", 00:06:48.777 "thread_get_io_channels", 00:06:48.777 "thread_get_pollers", 00:06:48.777 "thread_get_stats", 00:06:48.777 "framework_monitor_context_switch", 00:06:48.777 "spdk_kill_instance", 00:06:48.777 "log_enable_timestamps", 00:06:48.777 "log_get_flags", 00:06:48.777 "log_clear_flag", 
00:06:48.777 "log_set_flag", 00:06:48.777 "log_get_level", 00:06:48.777 "log_set_level", 00:06:48.777 "log_get_print_level", 00:06:48.777 "log_set_print_level", 00:06:48.777 "framework_enable_cpumask_locks", 00:06:48.777 "framework_disable_cpumask_locks", 00:06:48.777 "framework_wait_init", 00:06:48.777 "framework_start_init", 00:06:48.777 "scsi_get_devices", 00:06:48.777 "bdev_get_histogram", 00:06:48.777 "bdev_enable_histogram", 00:06:48.777 "bdev_set_qos_limit", 00:06:48.777 "bdev_set_qd_sampling_period", 00:06:48.777 "bdev_get_bdevs", 00:06:48.777 "bdev_reset_iostat", 00:06:48.777 "bdev_get_iostat", 00:06:48.777 "bdev_examine", 00:06:48.777 "bdev_wait_for_examine", 00:06:48.777 "bdev_set_options", 00:06:48.777 "accel_get_stats", 00:06:48.777 "accel_set_options", 00:06:48.777 "accel_set_driver", 00:06:48.777 "accel_crypto_key_destroy", 00:06:48.777 "accel_crypto_keys_get", 00:06:48.777 "accel_crypto_key_create", 00:06:48.777 "accel_assign_opc", 00:06:48.777 "accel_get_module_info", 00:06:48.777 "accel_get_opc_assignments", 00:06:48.777 "vmd_rescan", 00:06:48.777 "vmd_remove_device", 00:06:48.777 "vmd_enable", 00:06:48.777 "sock_get_default_impl", 00:06:48.777 "sock_set_default_impl", 00:06:48.777 "sock_impl_set_options", 00:06:48.777 "sock_impl_get_options", 00:06:48.777 "iobuf_get_stats", 00:06:48.777 "iobuf_set_options", 00:06:48.777 "keyring_get_keys", 00:06:48.777 "framework_get_pci_devices", 00:06:48.777 "framework_get_config", 00:06:48.777 "framework_get_subsystems", 00:06:48.777 "fsdev_set_opts", 00:06:48.777 "fsdev_get_opts", 00:06:48.777 "trace_get_info", 00:06:48.777 "trace_get_tpoint_group_mask", 00:06:48.777 "trace_disable_tpoint_group", 00:06:48.777 "trace_enable_tpoint_group", 00:06:48.777 "trace_clear_tpoint_mask", 00:06:48.777 "trace_set_tpoint_mask", 00:06:48.777 "notify_get_notifications", 00:06:48.777 "notify_get_types", 00:06:48.777 "spdk_get_version", 00:06:48.777 "rpc_get_methods" 00:06:48.777 ] 00:06:49.041 02:21:07 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:49.041 02:21:07 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:49.041 02:21:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.041 02:21:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:49.041 02:21:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69782 00:06:49.041 02:21:07 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69782 ']' 00:06:49.042 02:21:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69782 00:06:49.042 02:21:07 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:49.042 02:21:07 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.042 02:21:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69782 00:06:49.042 02:21:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.042 02:21:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.042 killing process with pid 69782 00:06:49.042 02:21:07 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69782' 00:06:49.042 02:21:07 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69782 00:06:49.042 02:21:07 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69782 00:06:49.316 ************************************ 00:06:49.316 END TEST spdkcli_tcp 00:06:49.316 ************************************ 00:06:49.316 00:06:49.316 real 0m1.801s 00:06:49.316 user 0m2.961s 00:06:49.316 sys 0m0.555s 00:06:49.316 02:21:07 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.316 02:21:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.591 02:21:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.591 02:21:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.591 02:21:08 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.591 02:21:08 -- common/autotest_common.sh@10 -- # set +x 00:06:49.591 ************************************ 00:06:49.591 START TEST dpdk_mem_utility 00:06:49.591 ************************************ 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.591 * Looking for test storage... 00:06:49.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:49.591 
02:21:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.591 02:21:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:49.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.591 --rc genhtml_branch_coverage=1 00:06:49.591 --rc genhtml_function_coverage=1 00:06:49.591 --rc genhtml_legend=1 00:06:49.591 --rc geninfo_all_blocks=1 00:06:49.591 --rc geninfo_unexecuted_blocks=1 00:06:49.591 00:06:49.591 ' 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:49.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.591 --rc 
genhtml_branch_coverage=1 00:06:49.591 --rc genhtml_function_coverage=1 00:06:49.591 --rc genhtml_legend=1 00:06:49.591 --rc geninfo_all_blocks=1 00:06:49.591 --rc geninfo_unexecuted_blocks=1 00:06:49.591 00:06:49.591 ' 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:49.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.591 --rc genhtml_branch_coverage=1 00:06:49.591 --rc genhtml_function_coverage=1 00:06:49.591 --rc genhtml_legend=1 00:06:49.591 --rc geninfo_all_blocks=1 00:06:49.591 --rc geninfo_unexecuted_blocks=1 00:06:49.591 00:06:49.591 ' 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:49.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.591 --rc genhtml_branch_coverage=1 00:06:49.591 --rc genhtml_function_coverage=1 00:06:49.591 --rc genhtml_legend=1 00:06:49.591 --rc geninfo_all_blocks=1 00:06:49.591 --rc geninfo_unexecuted_blocks=1 00:06:49.591 00:06:49.591 ' 00:06:49.591 02:21:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:49.591 02:21:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69882 00:06:49.591 02:21:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.591 02:21:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69882 00:06:49.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
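The `waitforlisten 69882` call above blocks until the freshly launched `spdk_tgt` is both still alive and listening on its RPC UNIX socket (`/var/tmp/spdk.sock` by default, with `max_retries=100`). A minimal sketch of that polling loop, under the assumption that socket existence is a good-enough readiness probe (the real helper in `autotest_common.sh` does more bookkeeping); the `python3` one-liner is only a stand-in target for the demo:

```shell
#!/usr/bin/env bash
# Hedged sketch of a waitforlisten-style poll loop (illustrative, not the
# real autotest_common.sh implementation): succeed once the RPC UNIX socket
# exists, fail early if the target process died before listening.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        [[ -S $rpc_addr ]] && return 0            # socket is up: target is listening
        kill -0 "$pid" 2>/dev/null || return 1    # target exited before listening
        sleep 0.1
    done
    return 1                                      # timed out
}

# Demo with a stand-in "target": a python one-liner that binds a UNIX socket.
sock=/tmp/waitforlisten_demo.sock
rm -f "$sock"
python3 -c "import socket,time; s=socket.socket(socket.AF_UNIX); s.bind('$sock'); time.sleep(5)" &
tgt=$!
if waitforlisten_sketch "$tgt" "$sock"; then echo "listening"; else echo "failed"; fi
kill "$tgt" 2>/dev/null; wait "$tgt" 2>/dev/null
rm -f "$sock"
```

One caveat of the sketch: `kill -0` reports failure for processes you lack permission to signal, so it only distinguishes "dead" from "alive" for processes owned by the test user, which is the case in this harness.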
00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 69882 ']' 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.591 02:21:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.851 [2024-10-13 02:21:08.346071] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:49.851 [2024-10-13 02:21:08.346314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69882 ] 00:06:49.851 [2024-10-13 02:21:08.492153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.111 [2024-10-13 02:21:08.541974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.682 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.682 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:50.682 02:21:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:50.682 02:21:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:50.682 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.682 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.682 { 00:06:50.682 "filename": "/tmp/spdk_mem_dump.txt" 00:06:50.682 } 00:06:50.682 
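The `trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT` installed here mirrors the teardown traced earlier for pid 69782: confirm the pid is set and alive, probe its command name with `ps --no-headers -o comm=`, refuse to signal a `sudo` wrapper, then `kill` and `wait` it. A simplified sketch of that pattern (the function name and the `sleep` stand-in are illustrative, not the real helper):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess teardown pattern seen in the log
# (illustrative, not the real autotest_common.sh helper): verify the pid is
# alive, log, SIGTERM it, and reap it so no zombie is left behind.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1                   # '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1      # process must exist
    local name
    name=$(ps --no-headers -o comm= "$pid")     # same probe the log shows
    [[ $name == sudo ]] && return 1             # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                     # reap (works for own children)
    return 0
}

sleep 30 &
killprocess_sketch $!
```

Reaping with `wait` is what lets the harness print the "killing process with pid" line and move on knowing the target has actually exited, not merely been signaled.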
02:21:09 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.682 02:21:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:50.682 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:50.682 1 heaps totaling size 860.000000 MiB 00:06:50.682 size: 860.000000 MiB heap id: 0 00:06:50.682 end heaps---------- 00:06:50.682 9 mempools totaling size 642.649841 MiB 00:06:50.682 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:50.682 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:50.682 size: 92.545471 MiB name: bdev_io_69882 00:06:50.682 size: 51.011292 MiB name: evtpool_69882 00:06:50.682 size: 50.003479 MiB name: msgpool_69882 00:06:50.682 size: 36.509338 MiB name: fsdev_io_69882 00:06:50.682 size: 21.763794 MiB name: PDU_Pool 00:06:50.682 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:50.682 size: 0.026123 MiB name: Session_Pool 00:06:50.682 end mempools------- 00:06:50.682 6 memzones totaling size 4.142822 MiB 00:06:50.682 size: 1.000366 MiB name: RG_ring_0_69882 00:06:50.682 size: 1.000366 MiB name: RG_ring_1_69882 00:06:50.682 size: 1.000366 MiB name: RG_ring_4_69882 00:06:50.682 size: 1.000366 MiB name: RG_ring_5_69882 00:06:50.682 size: 0.125366 MiB name: RG_ring_2_69882 00:06:50.682 size: 0.015991 MiB name: RG_ring_3_69882 00:06:50.682 end memzones------- 00:06:50.682 02:21:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:50.682 heap id: 0 total size: 860.000000 MiB number of busy elements: 303 number of free elements: 16 00:06:50.682 list of free elements. 
size: 13.937256 MiB 00:06:50.682 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:50.682 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:50.682 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:50.682 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:50.682 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:50.682 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:50.682 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:50.682 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:50.682 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:50.682 element at address: 0x20001d800000 with size: 0.568237 MiB 00:06:50.682 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:50.682 element at address: 0x200003e00000 with size: 0.488647 MiB 00:06:50.682 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:50.682 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:50.682 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:50.682 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:50.683 list of standard malloc elements. 
size: 199.266052 MiB 00:06:50.683 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:50.683 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:50.683 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:50.683 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:50.683 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:50.683 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:50.683 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:50.683 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:50.683 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:50.683 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:50.683 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:06:50.683 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7d9c0 with 
size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:50.683 element at address: 
0x200003eff0c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:50.683 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:50.683 
element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:50.683 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892bc0 with size: 0.000183 
MiB 00:06:50.684 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8940c0 
with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:50.684 element at 
address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:50.684 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:50.684 [... elements at addresses 0x20002ac6c3c0 through 0x20002ac6ff00, each with size: 0.000183 MiB, elided ...] 00:06:50.684 list of memzone associated elements. 
size: 646.796692 MiB 00:06:50.684 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:50.684 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:50.684 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:50.684 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:50.684 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:50.684 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69882_0 00:06:50.684 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:50.684 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69882_0 00:06:50.684 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:50.684 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69882_0 00:06:50.684 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:50.684 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69882_0 00:06:50.685 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:50.685 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:50.685 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:50.685 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:50.685 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:50.685 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69882 00:06:50.685 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:50.685 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69882 00:06:50.685 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:50.685 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69882 00:06:50.685 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:50.685 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:50.685 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:50.685 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:50.685 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:50.685 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:50.685 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:50.685 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:50.685 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:50.685 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69882 00:06:50.685 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:50.685 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69882 00:06:50.685 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:50.685 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69882 00:06:50.685 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:50.685 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69882 00:06:50.685 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:50.685 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69882 00:06:50.685 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:50.685 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69882 00:06:50.685 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:50.685 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:50.685 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:50.685 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:50.685 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:50.685 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:50.685 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:50.685 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69882 00:06:50.685 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:50.685 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:50.685 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:50.685 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:50.685 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:50.685 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69882 00:06:50.685 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:50.685 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:50.685 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:50.685 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69882 00:06:50.685 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:50.685 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69882 00:06:50.685 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:50.685 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69882 00:06:50.685 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:50.685 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:50.685 02:21:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:50.685 02:21:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69882 00:06:50.685 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 69882 ']' 00:06:50.685 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 69882 00:06:50.685 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:50.685 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.685 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69882 00:06:50.685 killing process with pid 69882 00:06:50.685 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.685 02:21:09 
dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.685 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69882' 00:06:50.685 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 69882 00:06:50.685 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 69882 00:06:51.254 ************************************ 00:06:51.254 END TEST dpdk_mem_utility 00:06:51.254 ************************************ 00:06:51.254 00:06:51.254 real 0m1.690s 00:06:51.254 user 0m1.610s 00:06:51.254 sys 0m0.525s 00:06:51.254 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.254 02:21:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:51.254 02:21:09 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:51.254 02:21:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.254 02:21:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.254 02:21:09 -- common/autotest_common.sh@10 -- # set +x 00:06:51.254 ************************************ 00:06:51.254 START TEST event 00:06:51.254 ************************************ 00:06:51.254 02:21:09 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:51.254 * Looking for test storage... 
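The teardown trace above exercises the harness's process-kill helper (`kill -0` liveness probe, a `ps -o comm=` sanity check, then kill + wait). A minimal sketch of the same pattern; names and messages are illustrative condensations, not the actual autotest_common.sh implementation:

```shell
#!/usr/bin/env bash
# Hypothetical condensation of the killprocess pattern traced above.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only probes that the pid exists
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid does not exist" >&2
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child so the pid cannot linger as a zombie
    wait "$pid" 2>/dev/null || true
}
```

The real helper additionally refuses to signal processes named `sudo`, which is what the `'[' reactor_0 = sudo ']'` check in the trace implements.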
00:06:51.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:51.254 02:21:09 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.254 02:21:09 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.254 02:21:09 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.514 02:21:09 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.514 02:21:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.514 02:21:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.514 02:21:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.514 02:21:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.514 02:21:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.514 02:21:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.514 02:21:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.514 02:21:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.514 02:21:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.514 02:21:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.514 02:21:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.514 02:21:09 event -- scripts/common.sh@344 -- # case "$op" in 00:06:51.514 02:21:09 event -- scripts/common.sh@345 -- # : 1 00:06:51.514 02:21:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.514 02:21:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.514 02:21:09 event -- scripts/common.sh@365 -- # decimal 1 00:06:51.514 02:21:09 event -- scripts/common.sh@353 -- # local d=1 00:06:51.514 02:21:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.514 02:21:09 event -- scripts/common.sh@355 -- # echo 1 00:06:51.514 02:21:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.514 02:21:09 event -- scripts/common.sh@366 -- # decimal 2 00:06:51.514 02:21:09 event -- scripts/common.sh@353 -- # local d=2 00:06:51.514 02:21:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.514 02:21:09 event -- scripts/common.sh@355 -- # echo 2 00:06:51.514 02:21:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.514 02:21:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.514 02:21:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.514 02:21:09 event -- scripts/common.sh@368 -- # return 0 00:06:51.514 02:21:09 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.514 02:21:09 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:51.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.514 --rc genhtml_branch_coverage=1 00:06:51.514 --rc genhtml_function_coverage=1 00:06:51.514 --rc genhtml_legend=1 00:06:51.514 --rc geninfo_all_blocks=1 00:06:51.514 --rc geninfo_unexecuted_blocks=1 00:06:51.514 00:06:51.514 ' 00:06:51.514 02:21:09 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:51.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.514 --rc genhtml_branch_coverage=1 00:06:51.514 --rc genhtml_function_coverage=1 00:06:51.514 --rc genhtml_legend=1 00:06:51.514 --rc geninfo_all_blocks=1 00:06:51.514 --rc geninfo_unexecuted_blocks=1 00:06:51.514 00:06:51.514 ' 00:06:51.514 02:21:09 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:51.514 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:51.514 --rc genhtml_branch_coverage=1 00:06:51.514 --rc genhtml_function_coverage=1 00:06:51.514 --rc genhtml_legend=1 00:06:51.514 --rc geninfo_all_blocks=1 00:06:51.514 --rc geninfo_unexecuted_blocks=1 00:06:51.514 00:06:51.514 ' 00:06:51.514 02:21:09 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:51.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.514 --rc genhtml_branch_coverage=1 00:06:51.514 --rc genhtml_function_coverage=1 00:06:51.514 --rc genhtml_legend=1 00:06:51.514 --rc geninfo_all_blocks=1 00:06:51.514 --rc geninfo_unexecuted_blocks=1 00:06:51.514 00:06:51.514 ' 00:06:51.514 02:21:09 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:51.514 02:21:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:51.514 02:21:09 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:51.514 02:21:09 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:51.514 02:21:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.514 02:21:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.514 ************************************ 00:06:51.514 START TEST event_perf 00:06:51.514 ************************************ 00:06:51.514 02:21:10 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:51.514 Running I/O for 1 seconds...[2024-10-13 02:21:10.050710] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
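The `lt 1.15 2` / `cmp_versions` trace above compares the installed lcov version field by field after splitting on `.`, `-` and `:`. A sketch of that logic under a hypothetical name (`version_lt`), not the actual scripts/common.sh code:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the version comparison traced above: split both
# versions on '.', '-' and ':', compare numerically field by field,
# treating missing fields as 0.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0   # first differing field decides
        (( a > b )) && return 1
    done
    return 1   # equal versions: not less-than
}
```

With this, `version_lt 1.15 2` succeeds, which is why the trace falls into the branch that sets the `--rc lcov_branch_coverage` options for the older lcov.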
00:06:51.514 [2024-10-13 02:21:10.050952] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69957 ] 00:06:51.514 [2024-10-13 02:21:10.184811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.774 [2024-10-13 02:21:10.235727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.774 Running I/O for 1 seconds...[2024-10-13 02:21:10.237042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.774 [2024-10-13 02:21:10.237036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.774 [2024-10-13 02:21:10.237150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.713 00:06:52.713 lcore 0: 131861 00:06:52.713 lcore 1: 131858 00:06:52.713 lcore 2: 131861 00:06:52.713 lcore 3: 131861 00:06:52.713 done. 
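The event_perf run above was launched with `-m 0xF` and ended up with reactors on lcores 0-3. A small sketch (hypothetical helper, not part of SPDK or DPDK) of how such a hex coremask maps to lcore ids:

```shell
#!/usr/bin/env bash
# Expand a DPDK-style hex coremask into the lcore ids it selects.
mask_to_lcores() {
    local mask=$(( $1 ))   # accepts 0xF, 15, ...
    local -a out=()
    local bit=0
    while (( mask > 0 )); do
        if (( mask & 1 )); then
            out+=("$bit")
        fi
        mask=$(( mask >> 1 ))
        bit=$(( bit + 1 ))
    done
    echo "${out[@]}"
}
```

For example, `mask_to_lcores 0xF` yields `0 1 2 3`, matching the four lcore counters reported above.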
00:06:52.713 00:06:52.713 real 0m1.321s 00:06:52.713 user 0m4.100s 00:06:52.713 sys 0m0.099s 00:06:52.713 02:21:11 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.713 02:21:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.713 ************************************ 00:06:52.713 END TEST event_perf 00:06:52.713 ************************************ 00:06:52.713 02:21:11 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:52.713 02:21:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:52.713 02:21:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.713 02:21:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.713 ************************************ 00:06:52.713 START TEST event_reactor 00:06:52.713 ************************************ 00:06:52.972 02:21:11 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:52.972 [2024-10-13 02:21:11.440351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:52.972 [2024-10-13 02:21:11.440498] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70002 ] 00:06:52.972 [2024-10-13 02:21:11.586125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.972 [2024-10-13 02:21:11.630974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.353 test_start 00:06:54.353 oneshot 00:06:54.353 tick 100 00:06:54.353 tick 100 00:06:54.353 tick 250 00:06:54.353 tick 100 00:06:54.353 tick 100 00:06:54.353 tick 100 00:06:54.353 tick 250 00:06:54.353 tick 500 00:06:54.353 tick 100 00:06:54.353 tick 100 00:06:54.353 tick 250 00:06:54.353 tick 100 00:06:54.353 tick 100 00:06:54.353 test_end 00:06:54.353 00:06:54.353 real 0m1.326s 00:06:54.353 user 0m1.131s 00:06:54.353 sys 0m0.088s 00:06:54.353 02:21:12 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.353 ************************************ 00:06:54.353 END TEST event_reactor 00:06:54.353 ************************************ 00:06:54.353 02:21:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:54.353 02:21:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.353 02:21:12 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:54.353 02:21:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.353 02:21:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.353 ************************************ 00:06:54.353 START TEST event_reactor_perf 00:06:54.353 ************************************ 00:06:54.353 02:21:12 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.353 [2024-10-13 
02:21:12.832074] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:54.353 [2024-10-13 02:21:12.832214] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70033 ] 00:06:54.353 [2024-10-13 02:21:12.977996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.353 [2024-10-13 02:21:13.026397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.733 test_start 00:06:55.733 test_end 00:06:55.733 Performance: 383524 events per second 00:06:55.733 00:06:55.733 real 0m1.327s 00:06:55.733 user 0m1.132s 00:06:55.733 sys 0m0.088s 00:06:55.733 ************************************ 00:06:55.733 END TEST event_reactor_perf 00:06:55.733 ************************************ 00:06:55.733 02:21:14 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.733 02:21:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.733 02:21:14 event -- event/event.sh@49 -- # uname -s 00:06:55.733 02:21:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:55.733 02:21:14 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:55.733 02:21:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.733 02:21:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.733 02:21:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.733 ************************************ 00:06:55.733 START TEST event_scheduler 00:06:55.733 ************************************ 00:06:55.733 02:21:14 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:55.733 * Looking for test storage... 
00:06:55.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:55.733 02:21:14 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.733 02:21:14 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.733 02:21:14 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.733 02:21:14 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.733 02:21:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:55.734 02:21:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.993 02:21:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.993 02:21:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.993 02:21:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:55.993 02:21:14 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.993 02:21:14 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.993 --rc genhtml_branch_coverage=1 00:06:55.993 --rc genhtml_function_coverage=1 00:06:55.993 --rc genhtml_legend=1 00:06:55.993 --rc geninfo_all_blocks=1 00:06:55.993 --rc geninfo_unexecuted_blocks=1 00:06:55.993 00:06:55.993 ' 00:06:55.993 02:21:14 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.993 --rc genhtml_branch_coverage=1 00:06:55.993 --rc genhtml_function_coverage=1 00:06:55.993 --rc 
genhtml_legend=1 00:06:55.993 --rc geninfo_all_blocks=1 00:06:55.993 --rc geninfo_unexecuted_blocks=1 00:06:55.993 00:06:55.993 ' 00:06:55.993 02:21:14 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.993 --rc genhtml_branch_coverage=1 00:06:55.993 --rc genhtml_function_coverage=1 00:06:55.993 --rc genhtml_legend=1 00:06:55.993 --rc geninfo_all_blocks=1 00:06:55.993 --rc geninfo_unexecuted_blocks=1 00:06:55.993 00:06:55.993 ' 00:06:55.993 02:21:14 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.993 --rc genhtml_branch_coverage=1 00:06:55.993 --rc genhtml_function_coverage=1 00:06:55.993 --rc genhtml_legend=1 00:06:55.993 --rc geninfo_all_blocks=1 00:06:55.993 --rc geninfo_unexecuted_blocks=1 00:06:55.993 00:06:55.993 ' 00:06:55.993 02:21:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:55.993 02:21:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70104 00:06:55.993 02:21:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:55.993 02:21:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.993 02:21:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70104 00:06:55.993 02:21:14 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70104 ']' 00:06:55.993 02:21:14 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.994 02:21:14 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.994 02:21:14 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.994 02:21:14 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.994 02:21:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.994 [2024-10-13 02:21:14.497303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:55.994 [2024-10-13 02:21:14.497428] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70104 ] 00:06:55.994 [2024-10-13 02:21:14.634815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.253 [2024-10-13 02:21:14.686041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.253 [2024-10-13 02:21:14.686340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.253 [2024-10-13 02:21:14.686352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.253 [2024-10-13 02:21:14.686468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:56.824 02:21:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.824 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.824 
POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.824 POWER: Cannot set governor of lcore 0 to performance 00:06:56.824 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.824 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.824 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:56.824 POWER: Unable to set Power Management Environment for lcore 0 00:06:56.824 [2024-10-13 02:21:15.335373] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:56.824 [2024-10-13 02:21:15.335396] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:56.824 [2024-10-13 02:21:15.335432] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:56.824 [2024-10-13 02:21:15.335452] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:56.824 [2024-10-13 02:21:15.335460] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:56.824 [2024-10-13 02:21:15.335488] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.824 02:21:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 [2024-10-13 02:21:15.407408] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
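The scheduler_create_thread trace that follows issues one RPC per thread: an active (`-a 100`) and an idle (`-a 0`) thread pinned to each core. A sketch of that loop; `create_pinned_threads` is hypothetical and only prints the RPC commands rather than invoking `rpc_cmd` against the running app:

```shell
#!/usr/bin/env bash
# Print the per-core scheduler_thread_create RPCs the test issues one at a
# time: one busy (active) and one idle thread pinned to each core's mask.
create_pinned_threads() {
    local ncores=$1
    local core
    for (( core = 0; core < ncores; core++ )); do
        local mask=$(( 1 << core ))
        printf 'scheduler_thread_create -n active_pinned -m 0x%X -a 100\n' "$mask"
        printf 'scheduler_thread_create -n idle_pinned -m 0x%X -a 0\n' "$mask"
    done
}
```

With 4 cores this produces the eight pinned-thread RPCs (`-m 0x1` through `-m 0x8`) visible in the trace, before the unpinned `one_third_active` thread is added.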
00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.824 02:21:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 ************************************ 00:06:56.824 START TEST scheduler_create_thread 00:06:56.824 ************************************ 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 2 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 3 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 4 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 5 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 6 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.824 7 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.824 8 00:06:56.824 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.825 02:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:56.825 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.825 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.085 9 00:06:57.085 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.085 02:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:57.085 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.085 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.085 10 00:06:57.085 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.085 02:21:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:57.085 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.085 02:21:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.465 02:21:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.465 02:21:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:58.465 02:21:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:58.465 02:21:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.465 02:21:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.034 02:21:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.034 02:21:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:59.034 02:21:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.034 02:21:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.975 02:21:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.975 02:21:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:59.975 02:21:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:59.975 02:21:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.975 02:21:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.915 ************************************ 00:07:00.915 END TEST scheduler_create_thread 00:07:00.915 ************************************ 00:07:00.915 02:21:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.915 00:07:00.915 real 0m3.877s 00:07:00.915 user 0m0.028s 00:07:00.915 sys 0m0.005s 00:07:00.915 02:21:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.915 02:21:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.915 02:21:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:00.915 02:21:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70104 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70104 ']' 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70104 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70104 00:07:00.915 killing process with pid 70104 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70104' 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70104 00:07:00.915 02:21:19 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70104 00:07:01.175 [2024-10-13 02:21:19.677996] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:01.436 ************************************ 00:07:01.436 END TEST event_scheduler 00:07:01.436 ************************************ 00:07:01.436 00:07:01.436 real 0m5.812s 00:07:01.436 user 0m12.035s 00:07:01.436 sys 0m0.474s 00:07:01.436 02:21:20 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.436 02:21:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:01.436 02:21:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:01.436 02:21:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:01.436 02:21:20 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.436 02:21:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.436 02:21:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.436 ************************************ 00:07:01.436 START TEST app_repeat 00:07:01.436 ************************************ 00:07:01.436 02:21:20 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70215 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:01.436 
02:21:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70215' 00:07:01.436 Process app_repeat pid: 70215 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:01.436 spdk_app_start Round 0 00:07:01.436 02:21:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70215 /var/tmp/spdk-nbd.sock 00:07:01.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.436 02:21:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70215 ']' 00:07:01.436 02:21:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.436 02:21:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.436 02:21:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.436 02:21:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.436 02:21:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.696 [2024-10-13 02:21:20.139408] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
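The `killprocess` helper invoked in the trap above (and in the scheduler teardown earlier, with its `kill -0` probe) follows a common harness pattern: check that the pid still exists, send SIGTERM, then reap the child. A hypothetical standalone sketch (the function name is borrowed from the log; the body is a simplification, not the real autotest_common.sh):

```shell
# Hypothetical sketch of the killprocess pattern in the log:
# kill -0 delivers no signal, it only checks that the process
# exists; then SIGTERM is sent and `wait` reaps the child so the
# test leaves no zombie behind.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null || true          # reap; ignore non-child noise
}
```

Because of the leading `kill -0` check, calling the helper twice (as a trap plus an explicit cleanup can do) is harmless: the second call is a no-op on a dead pid.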
00:07:01.696 [2024-10-13 02:21:20.139636] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70215 ] 00:07:01.696 [2024-10-13 02:21:20.282840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.696 [2024-10-13 02:21:20.328643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.696 [2024-10-13 02:21:20.328780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.635 02:21:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.635 02:21:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:02.635 02:21:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.635 Malloc0 00:07:02.635 02:21:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.895 Malloc1 00:07:02.895 02:21:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.895 02:21:21 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.895 02:21:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.155 /dev/nbd0 00:07:03.155 02:21:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.155 02:21:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.155 1+0 records in 00:07:03.155 1+0 
records out 00:07:03.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521422 s, 7.9 MB/s 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.155 02:21:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:03.155 02:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.155 02:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.155 02:21:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:03.415 /dev/nbd1 00:07:03.415 02:21:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:03.415 02:21:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.415 1+0 records in 00:07:03.415 1+0 records out 00:07:03.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255967 s, 16.0 MB/s 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.415 02:21:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:03.415 02:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.415 02:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.415 02:21:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.415 02:21:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.415 02:21:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.675 { 00:07:03.675 "nbd_device": "/dev/nbd0", 00:07:03.675 "bdev_name": "Malloc0" 00:07:03.675 }, 00:07:03.675 { 00:07:03.675 "nbd_device": "/dev/nbd1", 00:07:03.675 "bdev_name": "Malloc1" 00:07:03.675 } 00:07:03.675 ]' 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.675 { 00:07:03.675 "nbd_device": "/dev/nbd0", 00:07:03.675 "bdev_name": "Malloc0" 00:07:03.675 }, 00:07:03.675 { 00:07:03.675 "nbd_device": "/dev/nbd1", 00:07:03.675 "bdev_name": "Malloc1" 00:07:03.675 } 00:07:03.675 ]' 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
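The repeated `grep -q -w nbd0 /proc/partitions` probes with `break` above are the `waitfornbd` helper polling until the kernel exposes the device, bounded at 20 attempts. A hypothetical generic sketch of that retry loop (`wait_for` is invented; the attempt limit and the predicate come from the log):

```shell
# Hypothetical sketch of the bounded polling seen in waitfornbd:
# retry a predicate up to a fixed number of attempts, sleeping
# briefly between tries, and fail if it never becomes true.
wait_for() {
    tries=$1; shift
    i=1
    while [ "$i" -le "$tries" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    echo "wait_for: condition never became true: $*" >&2
    return 1
}

# The log's actual check would read:
#   wait_for 20 grep -q -w nbd0 /proc/partitions
```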
00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:03.675 /dev/nbd1' 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:03.675 /dev/nbd1' 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:03.675 256+0 records in 00:07:03.675 256+0 records out 00:07:03.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013105 s, 80.0 MB/s 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.675 256+0 records in 00:07:03.675 256+0 records out 00:07:03.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270174 s, 38.8 MB/s 00:07:03.675 02:21:22 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.675 256+0 records in 00:07:03.675 256+0 records out 00:07:03.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268616 s, 39.0 MB/s 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.675 02:21:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.676 02:21:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.935 02:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.935 02:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.935 02:21:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.935 02:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.935 02:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.935 02:21:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.935 02:21:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.935 02:21:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.935 02:21:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.936 02:21:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.195 02:21:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.196 02:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.455 02:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.455 02:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.455 02:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.455 02:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.455 02:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.455 02:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.455 02:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:04.455 02:21:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.455 02:21:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.455 02:21:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.455 02:21:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.455 02:21:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.456 02:21:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:04.715 02:21:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.975 [2024-10-13 02:21:23.403451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.975 [2024-10-13 02:21:23.448456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.975 [2024-10-13 02:21:23.448460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.975 
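The data-verify pass above writes a random 1 MiB pattern through each nbd device with `dd` and then byte-compares it back with `cmp -b -n 1M`. A hypothetical sketch of that roundtrip, with plain files standing in for `/dev/nbd0`/`/dev/nbd1` so it runs without the nbd module (the real test adds `oflag=direct`; block size and count match the log):

```shell
# Hypothetical sketch of the nbd write/verify pattern: push a 1 MiB
# pattern file through the device, then compare the first 1M bytes.
# cmp exits non-zero on the first differing byte, failing the test.
verify_roundtrip() {
    pattern=$1 dev=$2
    dd if="$pattern" of="$dev" bs=4096 count=256 conv=notrunc 2>/dev/null &&
    cmp -b -n 1M "$pattern" "$dev"
}
```

Running the writes through the nbd devices and then comparing against the original `nbdrandtest` file is what gives the harness end-to-end coverage of the Malloc bdev path, not just the RPC plumbing.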
[2024-10-13 02:21:23.491381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:04.975 [2024-10-13 02:21:23.491474] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:08.271 spdk_app_start Round 1 00:07:08.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:08.271 02:21:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:08.271 02:21:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:08.271 02:21:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70215 /var/tmp/spdk-nbd.sock 00:07:08.271 02:21:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70215 ']' 00:07:08.271 02:21:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.271 02:21:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.271 02:21:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:08.271 02:21:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.271 02:21:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.271 02:21:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.271 02:21:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:08.271 02:21:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.271 Malloc0 00:07:08.271 02:21:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.271 Malloc1 00:07:08.271 02:21:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.271 02:21:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.272 02:21:26 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:08.272 02:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.272 02:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.272 02:21:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:08.532 /dev/nbd0 00:07:08.532 02:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.532 02:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.532 1+0 records in 00:07:08.532 1+0 records out 00:07:08.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333525 s, 12.3 MB/s 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.532 
02:21:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.532 02:21:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:08.532 02:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.532 02:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.532 02:21:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:08.802 /dev/nbd1 00:07:08.802 02:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.802 02:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.802 1+0 records in 00:07:08.802 1+0 records out 00:07:08.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400112 s, 10.2 MB/s 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:08.802 02:21:27 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.802 02:21:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:08.802 02:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.802 02:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.802 02:21:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.802 02:21:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.802 02:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.079 02:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.079 { 00:07:09.079 "nbd_device": "/dev/nbd0", 00:07:09.079 "bdev_name": "Malloc0" 00:07:09.079 }, 00:07:09.079 { 00:07:09.080 "nbd_device": "/dev/nbd1", 00:07:09.080 "bdev_name": "Malloc1" 00:07:09.080 } 00:07:09.080 ]' 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.080 { 00:07:09.080 "nbd_device": "/dev/nbd0", 00:07:09.080 "bdev_name": "Malloc0" 00:07:09.080 }, 00:07:09.080 { 00:07:09.080 "nbd_device": "/dev/nbd1", 00:07:09.080 "bdev_name": "Malloc1" 00:07:09.080 } 00:07:09.080 ]' 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.080 /dev/nbd1' 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.080 /dev/nbd1' 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:09.080 
02:21:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:09.080 256+0 records in 00:07:09.080 256+0 records out 00:07:09.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121941 s, 86.0 MB/s 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.080 256+0 records in 00:07:09.080 256+0 records out 00:07:09.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235706 s, 44.5 MB/s 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.080 256+0 records in 00:07:09.080 256+0 records out 00:07:09.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248062 s, 42.3 MB/s 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:09.080 02:21:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.340 02:21:27 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.340 02:21:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.600 02:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.860 02:21:28 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.860 02:21:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.860 02:21:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:10.118 02:21:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:10.378 [2024-10-13 02:21:28.836073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.378 [2024-10-13 02:21:28.878667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.378 [2024-10-13 02:21:28.878704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.378 [2024-10-13 02:21:28.921599] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:10.378 [2024-10-13 02:21:28.921667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:13.671 spdk_app_start Round 2 00:07:13.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:13.671 02:21:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:13.671 02:21:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:13.671 02:21:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70215 /var/tmp/spdk-nbd.sock 00:07:13.671 02:21:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70215 ']' 00:07:13.671 02:21:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.671 02:21:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.671 02:21:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.671 02:21:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.671 02:21:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.671 02:21:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.671 02:21:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:13.671 02:21:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.671 Malloc0 00:07:13.671 02:21:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.671 Malloc1 00:07:13.671 02:21:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.671 02:21:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:13.931 /dev/nbd0 00:07:13.931 02:21:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.931 02:21:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.931 1+0 records in 00:07:13.931 1+0 records out 00:07:13.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356307 s, 11.5 MB/s 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.931 02:21:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:13.931 02:21:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.931 02:21:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.931 02:21:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:14.191 /dev/nbd1 00:07:14.191 02:21:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:14.191 02:21:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:14.191 02:21:32 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:14.191 1+0 records in 00:07:14.191 1+0 records out 00:07:14.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035494 s, 11.5 MB/s 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.191 02:21:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:14.191 02:21:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.191 02:21:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.191 02:21:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.191 02:21:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.191 02:21:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.451 02:21:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:14.451 { 00:07:14.451 "nbd_device": "/dev/nbd0", 00:07:14.451 "bdev_name": "Malloc0" 00:07:14.451 }, 00:07:14.451 { 00:07:14.451 "nbd_device": "/dev/nbd1", 00:07:14.451 "bdev_name": "Malloc1" 00:07:14.451 } 00:07:14.451 ]' 00:07:14.451 02:21:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:14.451 { 
00:07:14.451 "nbd_device": "/dev/nbd0", 00:07:14.451 "bdev_name": "Malloc0" 00:07:14.451 }, 00:07:14.451 { 00:07:14.451 "nbd_device": "/dev/nbd1", 00:07:14.451 "bdev_name": "Malloc1" 00:07:14.451 } 00:07:14.451 ]' 00:07:14.451 02:21:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:14.451 /dev/nbd1' 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:14.451 /dev/nbd1' 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:14.451 256+0 records in 00:07:14.451 256+0 records out 00:07:14.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134298 s, 78.1 MB/s 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.451 02:21:33 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:14.451 256+0 records in 00:07:14.451 256+0 records out 00:07:14.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203255 s, 51.6 MB/s 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.451 02:21:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:14.451 256+0 records in 00:07:14.451 256+0 records out 00:07:14.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236672 s, 44.3 MB/s 00:07:14.452 02:21:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:14.452 02:21:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.452 02:21:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.452 02:21:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.452 02:21:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.452 02:21:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.452 02:21:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.452 02:21:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.452 02:21:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.711 02:21:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.970 02:21:33 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.970 02:21:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.230 02:21:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.230 02:21:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:15.489 02:21:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:15.755 
[2024-10-13 02:21:34.201365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.755 [2024-10-13 02:21:34.245598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.755 [2024-10-13 02:21:34.245604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.755 [2024-10-13 02:21:34.287791] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:15.755 [2024-10-13 02:21:34.287855] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:19.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:19.061 02:21:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70215 /var/tmp/spdk-nbd.sock 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70215 ']' 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:19.061 02:21:37 event.app_repeat -- event/event.sh@39 -- # killprocess 70215 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70215 ']' 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70215 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70215 00:07:19.061 killing process with pid 70215 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70215' 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70215 00:07:19.061 02:21:37 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70215 00:07:19.061 spdk_app_start is called in Round 0. 00:07:19.061 Shutdown signal received, stop current app iteration 00:07:19.061 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:19.062 spdk_app_start is called in Round 1. 00:07:19.062 Shutdown signal received, stop current app iteration 00:07:19.062 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:19.062 spdk_app_start is called in Round 2. 
00:07:19.062 Shutdown signal received, stop current app iteration 00:07:19.062 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:19.062 spdk_app_start is called in Round 3. 00:07:19.062 Shutdown signal received, stop current app iteration 00:07:19.062 02:21:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:19.062 02:21:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:19.062 00:07:19.062 real 0m17.411s 00:07:19.062 user 0m38.447s 00:07:19.062 sys 0m2.463s 00:07:19.062 02:21:37 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.062 02:21:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.062 ************************************ 00:07:19.062 END TEST app_repeat 00:07:19.062 ************************************ 00:07:19.062 02:21:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:19.062 02:21:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:19.062 02:21:37 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.062 02:21:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.062 02:21:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.062 ************************************ 00:07:19.062 START TEST cpu_locks 00:07:19.062 ************************************ 00:07:19.062 02:21:37 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:19.062 * Looking for test storage... 
00:07:19.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:19.062 02:21:37 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:19.062 02:21:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:19.062 02:21:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:19.322 02:21:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:19.322 02:21:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.323 02:21:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:19.323 02:21:37 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.323 02:21:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:19.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.323 --rc genhtml_branch_coverage=1 00:07:19.323 --rc genhtml_function_coverage=1 00:07:19.323 --rc genhtml_legend=1 00:07:19.323 --rc geninfo_all_blocks=1 00:07:19.323 --rc geninfo_unexecuted_blocks=1 00:07:19.323 00:07:19.323 ' 00:07:19.323 02:21:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:19.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.323 --rc genhtml_branch_coverage=1 00:07:19.323 --rc genhtml_function_coverage=1 00:07:19.323 --rc genhtml_legend=1 00:07:19.323 --rc geninfo_all_blocks=1 00:07:19.323 --rc geninfo_unexecuted_blocks=1 
00:07:19.323 00:07:19.323 ' 00:07:19.323 02:21:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:19.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.323 --rc genhtml_branch_coverage=1 00:07:19.323 --rc genhtml_function_coverage=1 00:07:19.323 --rc genhtml_legend=1 00:07:19.323 --rc geninfo_all_blocks=1 00:07:19.323 --rc geninfo_unexecuted_blocks=1 00:07:19.323 00:07:19.323 ' 00:07:19.323 02:21:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:19.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.323 --rc genhtml_branch_coverage=1 00:07:19.323 --rc genhtml_function_coverage=1 00:07:19.323 --rc genhtml_legend=1 00:07:19.323 --rc geninfo_all_blocks=1 00:07:19.323 --rc geninfo_unexecuted_blocks=1 00:07:19.323 00:07:19.323 ' 00:07:19.323 02:21:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:19.324 02:21:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:19.324 02:21:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:19.324 02:21:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:19.324 02:21:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.324 02:21:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.324 02:21:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 ************************************ 00:07:19.324 START TEST default_locks 00:07:19.324 ************************************ 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70642 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70642 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70642 ']' 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.324 02:21:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.324 [2024-10-13 02:21:37.891595] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
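The `lt 1.15 2` / `cmp_versions` xtrace earlier in this log walks a field-by-field version comparison: both versions are split on `.`, `-` and `:` via `IFS`, missing fields default to 0, and the first unequal field decides. A minimal standalone sketch of that helper follows; names mirror the trace, but this is an illustrative reconstruction and assumes purely numeric fields, which the real scripts/common.sh may handle more generally.

```shell
# Sketch of the lt()/cmp_versions helper traced above (assumption:
# numeric version fields only).
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"    # split "1.15" -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        if (( a < b )); then return 0; fi       # strictly lower: true
        if (( a > b )); then return 1; fi       # strictly greater: false
    done
    return 1                                    # equal: not lower-than
}
```

In the trace this decides whether the installed `lcov` is older than 2, which in turn selects the `--rc lcov_branch_coverage=1 ...` option set exported as `LCOV_OPTS`.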
00:07:19.324 [2024-10-13 02:21:37.891723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70642 ] 00:07:19.584 [2024-10-13 02:21:38.037844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.584 [2024-10-13 02:21:38.089579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.154 02:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.154 02:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:20.154 02:21:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70642 00:07:20.154 02:21:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70642 00:07:20.154 02:21:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.413 02:21:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70642 00:07:20.413 02:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70642 ']' 00:07:20.413 02:21:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70642 00:07:20.413 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:20.413 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.413 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70642 00:07:20.413 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.413 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.413 killing process with pid 70642 00:07:20.413 02:21:39 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70642' 00:07:20.413 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70642 00:07:20.413 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70642 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70642 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70642 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70642 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70642 ']' 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
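The `killprocess 70642` sequence above follows a fixed idiom: confirm the pid is still alive with `kill -0`, resolve its command name with `ps`, refuse to signal a `sudo` wrapper, then send SIGTERM and reap the child with `wait`. A hedged sketch of that idiom — not the exact autotest_common.sh code, just the pattern the xtrace shows:

```shell
# Sketch of the killprocess idiom seen in the trace above.
killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        return 1                               # no such process
    fi
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
        return 1                               # never TERM a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"                                # SIGTERM by default
    wait "$pid" 2>/dev/null || true            # reap if it is our child
}
```

The `reactor_0` name in the log is what `ps -o comm=` reports for an SPDK target pinned to core 0, which is why the trace compares it against `sudo` before killing.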
00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.983 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70642) - No such process 00:07:20.983 ERROR: process (pid: 70642) is no longer running 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.983 00:07:20.983 real 0m1.656s 00:07:20.983 user 0m1.611s 00:07:20.983 sys 0m0.563s 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.983 02:21:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.983 ************************************ 00:07:20.983 END TEST default_locks 00:07:20.983 ************************************ 00:07:20.983 02:21:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:20.983 02:21:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:07:20.983 02:21:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.983 02:21:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.983 ************************************ 00:07:20.983 START TEST default_locks_via_rpc 00:07:20.983 ************************************ 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70690 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70690 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70690 ']' 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.983 02:21:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.983 [2024-10-13 02:21:39.617992] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
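`waitforlisten` in the trace blocks until the freshly started `spdk_tgt` is listening on its UNIX-domain RPC socket (default `/var/tmp/spdk.sock`), giving up if the process dies first or `max_retries=100` polls elapse. A simplified sketch under those assumptions — the real helper may probe the socket with an actual RPC rather than merely testing that the socket file exists:

```shell
# Simplified sketch of the waitforlisten pattern traced above.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            return 1                  # target died while we were waiting
        fi
        if [ -S "$rpc_addr" ]; then
            return 0                  # socket exists, assume it is listening
        fi
        sleep 0.1
    done
    return 1                          # timed out
}
```

The `/autotest_common.sh: line 846: kill: (70642) - No such process` error earlier in the log is exactly this liveness probe firing against an already-killed target, which is what the `NOT waitforlisten` negative test asserts.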
00:07:20.983 [2024-10-13 02:21:39.618145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70690 ] 00:07:21.244 [2024-10-13 02:21:39.761475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.244 [2024-10-13 02:21:39.811352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.813 02:21:40 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70690 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70690 00:07:21.813 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.074 02:21:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70690 00:07:22.074 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70690 ']' 00:07:22.074 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70690 00:07:22.074 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:22.074 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.074 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70690 00:07:22.332 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.332 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.332 killing process with pid 70690 00:07:22.332 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70690' 00:07:22.333 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70690 00:07:22.333 02:21:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70690 00:07:22.598 00:07:22.598 real 0m1.632s 00:07:22.598 user 0m1.615s 00:07:22.598 sys 0m0.552s 00:07:22.598 02:21:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.598 02:21:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.598 ************************************ 00:07:22.598 END TEST default_locks_via_rpc 00:07:22.598 ************************************ 00:07:22.598 02:21:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:22.598 02:21:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.598 02:21:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.598 02:21:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.598 ************************************ 00:07:22.598 START TEST non_locking_app_on_locked_coremask 00:07:22.598 ************************************ 00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70742 00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70742 /var/tmp/spdk.sock 00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70742 ']' 00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.598 02:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.860 [2024-10-13 02:21:41.319629] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:22.860 [2024-10-13 02:21:41.319794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70742 ] 00:07:22.860 [2024-10-13 02:21:41.467273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.860 [2024-10-13 02:21:41.514926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70759 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70759 /var/tmp/spdk2.sock 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70759 ']' 00:07:23.800 02:21:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.800 02:21:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.800 [2024-10-13 02:21:42.245881] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:23.800 [2024-10-13 02:21:42.246032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70759 ] 00:07:23.800 [2024-10-13 02:21:42.380023] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:23.800 [2024-10-13 02:21:42.380083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.800 [2024-10-13 02:21:42.473006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.740 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.740 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:24.740 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70742 00:07:24.740 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70742 00:07:24.740 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.999 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70742 00:07:24.999 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70742 ']' 00:07:24.999 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70742 00:07:24.999 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:25.000 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.000 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70742 00:07:25.000 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.000 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.000 killing process with pid 70742 00:07:25.000 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70742' 00:07:25.000 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70742 00:07:25.000 02:21:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70742 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70759 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70759 ']' 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70759 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70759 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.939 killing process with pid 70759 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70759' 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70759 00:07:25.939 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70759 00:07:26.199 00:07:26.199 real 0m3.626s 00:07:26.199 user 0m3.797s 00:07:26.199 sys 0m1.097s 00:07:26.199 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:26.199 02:21:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.199 ************************************ 00:07:26.199 END TEST non_locking_app_on_locked_coremask 00:07:26.199 ************************************ 00:07:26.459 02:21:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:26.459 02:21:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.459 02:21:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.459 02:21:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.459 ************************************ 00:07:26.459 START TEST locking_app_on_unlocked_coremask 00:07:26.459 ************************************ 00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70817 00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70817 /var/tmp/spdk.sock 00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70817 ']' 00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.459 02:21:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.459 [2024-10-13 02:21:45.026320] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:26.459 [2024-10-13 02:21:45.026929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:07:26.718 [2024-10-13 02:21:45.154275] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
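These `locking_app_on_unlocked_coremask` steps revolve around SPDK's per-core lock files: the tests grep `lslocks -p <pid>` output for `spdk_cpu_lock`, and starting the first target with `--disable-cpumask-locks` (the `app.c: 914 ... CPU core locks deactivated` notice above) lets a second instance claim core 0. A hypothetical illustration of such a per-core lock using `flock` — the file name follows the `spdk_cpu_lock` pattern grepped for in the trace, but the exact mechanism inside SPDK's app.c is not shown in this log and this sketch should not be read as its implementation:

```shell
# Hypothetical per-core lock in the style the cpu_locks tests check for.
claim_core() {
    local core=$1 fd
    local lock="/var/tmp/spdk_cpu_lock_${core}"
    exec {fd}> "$lock"                 # dedicated fd; the open file is the lock
    if ! flock -n "$fd"; then          # non-blocking exclusive lock
        echo "core $core is already claimed" >&2
        return 1
    fi
    CLAIMED_FD=$fd                     # keep the fd open to hold the lock
}
```

Because `flock` locks belong to the open file description, a second `claim_core` on the same core fails even within one process — mirroring why a second `spdk_tgt -m 0x1` needs the locks disabled on one side.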
00:07:26.718 [2024-10-13 02:21:45.154331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.718 [2024-10-13 02:21:45.204290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70833 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70833 /var/tmp/spdk2.sock 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70833 ']' 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.288 02:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 [2024-10-13 02:21:45.929425] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:27.289 [2024-10-13 02:21:45.929574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70833 ] 00:07:27.548 [2024-10-13 02:21:46.066450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.548 [2024-10-13 02:21:46.169789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.118 02:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.118 02:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:28.118 02:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70833 00:07:28.118 02:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70833 00:07:28.118 02:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70817 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70817 ']' 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70817 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70817 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.687 killing process with pid 70817 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70817' 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70817 00:07:28.687 02:21:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70817 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70833 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70833 ']' 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70833 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70833 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.640 killing process with pid 70833 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70833' 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70833 00:07:29.640 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 70833 00:07:29.900 00:07:29.900 real 0m3.623s 00:07:29.900 user 0m3.817s 00:07:29.900 sys 0m1.068s 00:07:29.900 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.900 02:21:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.900 ************************************ 00:07:29.900 END TEST locking_app_on_unlocked_coremask 00:07:29.900 ************************************ 00:07:30.160 02:21:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:30.160 02:21:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.160 02:21:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.160 02:21:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.160 ************************************ 00:07:30.160 START TEST locking_app_on_locked_coremask 00:07:30.160 ************************************ 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70902 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70902 /var/tmp/spdk.sock 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70902 ']' 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.160 02:21:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.160 [2024-10-13 02:21:48.724525] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:30.160 [2024-10-13 02:21:48.724679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70902 ] 00:07:30.420 [2024-10-13 02:21:48.852265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.420 [2024-10-13 02:21:48.899712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70918 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70918 /var/tmp/spdk2.sock 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70918 /var/tmp/spdk2.sock 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70918 /var/tmp/spdk2.sock 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70918 ']' 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.989 02:21:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.989 [2024-10-13 02:21:49.630972] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:30.989 [2024-10-13 02:21:49.631110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70918 ] 00:07:31.250 [2024-10-13 02:21:49.764569] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70902 has claimed it. 00:07:31.250 [2024-10-13 02:21:49.764633] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:31.829 ERROR: process (pid: 70918) is no longer running 00:07:31.829 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70918) - No such process 00:07:31.829 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.829 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:31.829 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:31.829 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.829 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:31.829 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.829 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70902 00:07:31.829 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70902 00:07:31.829 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70902 00:07:32.398 02:21:50 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70902 ']' 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70902 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70902 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.398 killing process with pid 70902 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70902' 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70902 00:07:32.398 02:21:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70902 00:07:32.657 00:07:32.657 real 0m2.603s 00:07:32.657 user 0m2.789s 00:07:32.657 sys 0m0.784s 00:07:32.657 02:21:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.657 02:21:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.657 ************************************ 00:07:32.658 END TEST locking_app_on_locked_coremask 00:07:32.658 ************************************ 00:07:32.658 02:21:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:32.658 02:21:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:07:32.658 02:21:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.658 02:21:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.658 ************************************ 00:07:32.658 START TEST locking_overlapped_coremask 00:07:32.658 ************************************ 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70960 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70960 /var/tmp/spdk.sock 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70960 ']' 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.658 02:21:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.917 [2024-10-13 02:21:51.390682] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:32.917 [2024-10-13 02:21:51.390814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70960 ] 00:07:32.917 [2024-10-13 02:21:51.537358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.917 [2024-10-13 02:21:51.582915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.917 [2024-10-13 02:21:51.582963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.917 [2024-10-13 02:21:51.583035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70978 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70978 /var/tmp/spdk2.sock 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70978 /var/tmp/spdk2.sock 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:33.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70978 /var/tmp/spdk2.sock 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70978 ']' 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.854 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.854 [2024-10-13 02:21:52.299459] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:33.854 [2024-10-13 02:21:52.299633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70978 ] 00:07:33.854 [2024-10-13 02:21:52.437745] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70960 has claimed it. 00:07:33.854 [2024-10-13 02:21:52.437811] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:34.421 ERROR: process (pid: 70978) is no longer running 00:07:34.421 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70978) - No such process 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70960 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 70960 ']' 00:07:34.421 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 70960 00:07:34.421 02:21:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:34.422 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.422 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70960 00:07:34.422 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.422 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.422 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70960' 00:07:34.422 killing process with pid 70960 00:07:34.422 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 70960 00:07:34.422 02:21:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 70960 00:07:34.992 00:07:34.992 real 0m2.105s 00:07:34.992 user 0m5.599s 00:07:34.992 sys 0m0.522s 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.992 ************************************ 00:07:34.992 END TEST locking_overlapped_coremask 00:07:34.992 ************************************ 00:07:34.992 02:21:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:34.992 02:21:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.992 02:21:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.992 02:21:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.992 ************************************ 00:07:34.992 START TEST 
locking_overlapped_coremask_via_rpc 00:07:34.992 ************************************ 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71028 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71028 /var/tmp/spdk.sock 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71028 ']' 00:07:34.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.992 02:21:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.992 [2024-10-13 02:21:53.575400] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:34.992 [2024-10-13 02:21:53.575609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71028 ] 00:07:35.251 [2024-10-13 02:21:53.724073] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:35.251 [2024-10-13 02:21:53.724248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.251 [2024-10-13 02:21:53.773151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.251 [2024-10-13 02:21:53.773256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.251 [2024-10-13 02:21:53.773382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.818 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.818 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:35.818 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71040 00:07:35.818 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71040 /var/tmp/spdk2.sock 00:07:35.818 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71040 ']' 00:07:35.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:35.819 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.819 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.819 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.819 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.819 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.819 02:21:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:35.819 [2024-10-13 02:21:54.487873] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:35.819 [2024-10-13 02:21:54.488012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71040 ] 00:07:36.077 [2024-10-13 02:21:54.624681] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:36.077 [2024-10-13 02:21:54.624738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.077 [2024-10-13 02:21:54.724041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.077 [2024-10-13 02:21:54.727046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.077 [2024-10-13 02:21:54.727169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:36.645 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.645 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:36.645 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:36.645 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.645 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.904 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.904 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.904 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:36.904 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.904 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:36.904 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.904 02:21:55 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:36.904 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.904 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.904 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.905 [2024-10-13 02:21:55.348076] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71028 has claimed it. 00:07:36.905 request: 00:07:36.905 { 00:07:36.905 "method": "framework_enable_cpumask_locks", 00:07:36.905 "req_id": 1 00:07:36.905 } 00:07:36.905 Got JSON-RPC error response 00:07:36.905 response: 00:07:36.905 { 00:07:36.905 "code": -32603, 00:07:36.905 "message": "Failed to claim CPU core: 2" 00:07:36.905 } 00:07:36.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71028 /var/tmp/spdk.sock 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71028 ']' 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71040 /var/tmp/spdk2.sock 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71040 ']' 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.905 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.164 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.164 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:37.164 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:37.164 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:37.164 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:37.164 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:37.164 00:07:37.164 real 0m2.318s 00:07:37.164 user 0m1.044s 00:07:37.164 sys 0m0.197s 00:07:37.164 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.164 02:21:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.164 ************************************ 00:07:37.164 END TEST locking_overlapped_coremask_via_rpc 00:07:37.164 ************************************ 00:07:37.423 02:21:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:37.423 02:21:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71028 ]] 00:07:37.423 02:21:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71028 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71028 ']' 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71028 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71028 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71028' 00:07:37.423 killing process with pid 71028 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71028 00:07:37.423 02:21:55 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71028 00:07:37.682 02:21:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71040 ]] 00:07:37.682 02:21:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71040 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71040 ']' 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71040 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71040 00:07:37.682 killing process with pid 71040 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71040' 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71040 00:07:37.682 02:21:56 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71040 00:07:38.252 02:21:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:38.252 02:21:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:38.252 02:21:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71028 ]] 00:07:38.252 02:21:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71028 00:07:38.252 02:21:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71028 ']' 00:07:38.252 02:21:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71028 00:07:38.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71028) - No such process 00:07:38.252 Process with pid 71028 is not found 00:07:38.252 Process with pid 71040 is not found 00:07:38.252 02:21:56 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71028 is not found' 00:07:38.252 02:21:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71040 ]] 00:07:38.252 02:21:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71040 00:07:38.252 02:21:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71040 ']' 00:07:38.252 02:21:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71040 00:07:38.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71040) - No such process 00:07:38.252 02:21:56 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71040 is not found' 00:07:38.252 02:21:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:38.252 00:07:38.252 real 0m19.200s 00:07:38.252 user 0m31.859s 00:07:38.252 sys 0m5.964s 00:07:38.252 02:21:56 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.252 02:21:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.252 
************************************ 00:07:38.252 END TEST cpu_locks 00:07:38.252 ************************************ 00:07:38.252 00:07:38.252 real 0m47.050s 00:07:38.252 user 1m28.954s 00:07:38.252 sys 0m9.594s 00:07:38.252 02:21:56 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.252 02:21:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:38.252 ************************************ 00:07:38.252 END TEST event 00:07:38.252 ************************************ 00:07:38.252 02:21:56 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:38.252 02:21:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.252 02:21:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.252 02:21:56 -- common/autotest_common.sh@10 -- # set +x 00:07:38.252 ************************************ 00:07:38.252 START TEST thread 00:07:38.252 ************************************ 00:07:38.252 02:21:56 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:38.512 * Looking for test storage... 
00:07:38.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:38.512 02:21:57 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:38.512 02:21:57 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:38.512 02:21:57 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:38.512 02:21:57 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:38.512 02:21:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.512 02:21:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.512 02:21:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.512 02:21:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.512 02:21:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.512 02:21:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.512 02:21:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.513 02:21:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.513 02:21:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.513 02:21:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.513 02:21:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.513 02:21:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:38.513 02:21:57 thread -- scripts/common.sh@345 -- # : 1 00:07:38.513 02:21:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.513 02:21:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.513 02:21:57 thread -- scripts/common.sh@365 -- # decimal 1 00:07:38.513 02:21:57 thread -- scripts/common.sh@353 -- # local d=1 00:07:38.513 02:21:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.513 02:21:57 thread -- scripts/common.sh@355 -- # echo 1 00:07:38.513 02:21:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.513 02:21:57 thread -- scripts/common.sh@366 -- # decimal 2 00:07:38.513 02:21:57 thread -- scripts/common.sh@353 -- # local d=2 00:07:38.513 02:21:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.513 02:21:57 thread -- scripts/common.sh@355 -- # echo 2 00:07:38.513 02:21:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.513 02:21:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.513 02:21:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.513 02:21:57 thread -- scripts/common.sh@368 -- # return 0 00:07:38.513 02:21:57 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.513 02:21:57 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:38.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.513 --rc genhtml_branch_coverage=1 00:07:38.513 --rc genhtml_function_coverage=1 00:07:38.513 --rc genhtml_legend=1 00:07:38.513 --rc geninfo_all_blocks=1 00:07:38.513 --rc geninfo_unexecuted_blocks=1 00:07:38.513 00:07:38.513 ' 00:07:38.513 02:21:57 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:38.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.513 --rc genhtml_branch_coverage=1 00:07:38.513 --rc genhtml_function_coverage=1 00:07:38.513 --rc genhtml_legend=1 00:07:38.513 --rc geninfo_all_blocks=1 00:07:38.513 --rc geninfo_unexecuted_blocks=1 00:07:38.513 00:07:38.513 ' 00:07:38.513 02:21:57 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:38.513 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.513 --rc genhtml_branch_coverage=1 00:07:38.513 --rc genhtml_function_coverage=1 00:07:38.513 --rc genhtml_legend=1 00:07:38.513 --rc geninfo_all_blocks=1 00:07:38.513 --rc geninfo_unexecuted_blocks=1 00:07:38.513 00:07:38.513 ' 00:07:38.513 02:21:57 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:38.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.513 --rc genhtml_branch_coverage=1 00:07:38.513 --rc genhtml_function_coverage=1 00:07:38.513 --rc genhtml_legend=1 00:07:38.513 --rc geninfo_all_blocks=1 00:07:38.513 --rc geninfo_unexecuted_blocks=1 00:07:38.513 00:07:38.513 ' 00:07:38.513 02:21:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:38.513 02:21:57 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:38.513 02:21:57 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.513 02:21:57 thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.513 ************************************ 00:07:38.513 START TEST thread_poller_perf 00:07:38.513 ************************************ 00:07:38.513 02:21:57 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:38.513 [2024-10-13 02:21:57.166863] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:38.513 [2024-10-13 02:21:57.167573] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71178 ] 00:07:38.772 [2024-10-13 02:21:57.312974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.772 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:38.772 [2024-10-13 02:21:57.357925] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.153 [2024-10-13T02:21:58.837Z] ====================================== 00:07:40.153 [2024-10-13T02:21:58.837Z] busy:2296697134 (cyc) 00:07:40.153 [2024-10-13T02:21:58.837Z] total_run_count: 413000 00:07:40.153 [2024-10-13T02:21:58.837Z] tsc_hz: 2290000000 (cyc) 00:07:40.153 [2024-10-13T02:21:58.837Z] ====================================== 00:07:40.153 [2024-10-13T02:21:58.837Z] poller_cost: 5561 (cyc), 2428 (nsec) 00:07:40.153 ************************************ 00:07:40.153 END TEST thread_poller_perf 00:07:40.153 ************************************ 00:07:40.153 00:07:40.153 real 0m1.328s 00:07:40.153 user 0m1.139s 00:07:40.153 sys 0m0.083s 00:07:40.153 02:21:58 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.153 02:21:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:40.153 02:21:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:40.153 02:21:58 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:40.153 02:21:58 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.153 02:21:58 thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.153 ************************************ 00:07:40.153 START TEST thread_poller_perf 00:07:40.153 
************************************ 00:07:40.153 02:21:58 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:40.153 [2024-10-13 02:21:58.565781] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:40.153 [2024-10-13 02:21:58.565950] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71209 ] 00:07:40.153 [2024-10-13 02:21:58.711477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.153 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:40.153 [2024-10-13 02:21:58.760683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.535 [2024-10-13T02:22:00.219Z] ====================================== 00:07:41.535 [2024-10-13T02:22:00.219Z] busy:2293417976 (cyc) 00:07:41.535 [2024-10-13T02:22:00.219Z] total_run_count: 5386000 00:07:41.535 [2024-10-13T02:22:00.219Z] tsc_hz: 2290000000 (cyc) 00:07:41.535 [2024-10-13T02:22:00.219Z] ====================================== 00:07:41.535 [2024-10-13T02:22:00.219Z] poller_cost: 425 (cyc), 185 (nsec) 00:07:41.535 00:07:41.535 real 0m1.330s 00:07:41.535 user 0m1.133s 00:07:41.535 sys 0m0.091s 00:07:41.535 02:21:59 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.535 ************************************ 00:07:41.535 END TEST thread_poller_perf 00:07:41.535 ************************************ 00:07:41.536 02:21:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:41.536 02:21:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:41.536 ************************************ 00:07:41.536 END TEST thread 00:07:41.536 ************************************ 00:07:41.536 
00:07:41.536 real 0m3.020s 00:07:41.536 user 0m2.444s 00:07:41.536 sys 0m0.377s 00:07:41.536 02:21:59 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.536 02:21:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.536 02:21:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:41.536 02:21:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.536 02:21:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.536 02:21:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.536 02:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:41.536 ************************************ 00:07:41.536 START TEST app_cmdline 00:07:41.536 ************************************ 00:07:41.536 02:21:59 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.536 * Looking for test storage... 00:07:41.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.536 02:22:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:41.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.536 --rc genhtml_branch_coverage=1 00:07:41.536 --rc genhtml_function_coverage=1 00:07:41.536 --rc 
genhtml_legend=1 00:07:41.536 --rc geninfo_all_blocks=1 00:07:41.536 --rc geninfo_unexecuted_blocks=1 00:07:41.536 00:07:41.536 ' 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:41.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.536 --rc genhtml_branch_coverage=1 00:07:41.536 --rc genhtml_function_coverage=1 00:07:41.536 --rc genhtml_legend=1 00:07:41.536 --rc geninfo_all_blocks=1 00:07:41.536 --rc geninfo_unexecuted_blocks=1 00:07:41.536 00:07:41.536 ' 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:41.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.536 --rc genhtml_branch_coverage=1 00:07:41.536 --rc genhtml_function_coverage=1 00:07:41.536 --rc genhtml_legend=1 00:07:41.536 --rc geninfo_all_blocks=1 00:07:41.536 --rc geninfo_unexecuted_blocks=1 00:07:41.536 00:07:41.536 ' 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:41.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.536 --rc genhtml_branch_coverage=1 00:07:41.536 --rc genhtml_function_coverage=1 00:07:41.536 --rc genhtml_legend=1 00:07:41.536 --rc geninfo_all_blocks=1 00:07:41.536 --rc geninfo_unexecuted_blocks=1 00:07:41.536 00:07:41.536 ' 00:07:41.536 02:22:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:41.536 02:22:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71298 00:07:41.536 02:22:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:41.536 02:22:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71298 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71298 ']' 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.536 02:22:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.796 [2024-10-13 02:22:00.302927] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:41.796 [2024-10-13 02:22:00.303179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71298 ] 00:07:41.796 [2024-10-13 02:22:00.450801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.056 [2024-10-13 02:22:00.499190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.624 02:22:01 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.624 02:22:01 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:42.624 02:22:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:42.624 { 00:07:42.624 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:42.624 "fields": { 00:07:42.624 "major": 24, 00:07:42.624 "minor": 9, 00:07:42.624 "patch": 1, 00:07:42.624 "suffix": "-pre", 00:07:42.624 "commit": "b18e1bd62" 00:07:42.624 } 00:07:42.624 } 00:07:42.624 02:22:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:42.624 02:22:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:42.624 02:22:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:42.624 02:22:01 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:42.624 02:22:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:42.625 02:22:01 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.625 02:22:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:42.625 02:22:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:42.625 02:22:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.884 02:22:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:42.884 02:22:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:42.884 02:22:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.884 02:22:01 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.885 request: 00:07:42.885 { 00:07:42.885 "method": "env_dpdk_get_mem_stats", 00:07:42.885 "req_id": 1 00:07:42.885 } 00:07:42.885 Got JSON-RPC error response 00:07:42.885 response: 00:07:42.885 { 00:07:42.885 "code": -32601, 00:07:42.885 "message": "Method not found" 00:07:42.885 } 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.885 02:22:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71298 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71298 ']' 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71298 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:42.885 02:22:01 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.144 02:22:01 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71298 00:07:43.144 killing process with pid 71298 00:07:43.144 02:22:01 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.144 02:22:01 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.144 02:22:01 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71298' 00:07:43.144 02:22:01 app_cmdline -- common/autotest_common.sh@969 -- # kill 71298 00:07:43.144 02:22:01 app_cmdline -- common/autotest_common.sh@974 -- # wait 71298 00:07:43.404 00:07:43.404 real 0m2.016s 00:07:43.404 user 0m2.218s 00:07:43.404 sys 0m0.581s 00:07:43.404 02:22:01 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.404 02:22:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.404 ************************************ 00:07:43.404 END TEST app_cmdline 00:07:43.404 ************************************ 00:07:43.404 02:22:02 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:43.404 02:22:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.404 02:22:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.404 02:22:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.404 ************************************ 00:07:43.404 START TEST version 00:07:43.404 ************************************ 00:07:43.404 02:22:02 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:43.665 * Looking for test storage... 00:07:43.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:43.665 02:22:02 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:43.665 02:22:02 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:43.665 02:22:02 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:43.665 02:22:02 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:43.665 02:22:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.665 02:22:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.665 02:22:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.665 02:22:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.665 02:22:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.665 02:22:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.665 02:22:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.665 02:22:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.665 02:22:02 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.665 02:22:02 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:43.665 02:22:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.665 02:22:02 version -- scripts/common.sh@344 -- # case "$op" in 00:07:43.665 02:22:02 version -- scripts/common.sh@345 -- # : 1 00:07:43.665 02:22:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.665 02:22:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.665 02:22:02 version -- scripts/common.sh@365 -- # decimal 1 00:07:43.665 02:22:02 version -- scripts/common.sh@353 -- # local d=1 00:07:43.665 02:22:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.665 02:22:02 version -- scripts/common.sh@355 -- # echo 1 00:07:43.665 02:22:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.665 02:22:02 version -- scripts/common.sh@366 -- # decimal 2 00:07:43.665 02:22:02 version -- scripts/common.sh@353 -- # local d=2 00:07:43.665 02:22:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.665 02:22:02 version -- scripts/common.sh@355 -- # echo 2 00:07:43.665 02:22:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.665 02:22:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.665 02:22:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.665 02:22:02 version -- scripts/common.sh@368 -- # return 0 00:07:43.665 02:22:02 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.665 02:22:02 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:43.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.665 --rc genhtml_branch_coverage=1 00:07:43.665 --rc genhtml_function_coverage=1 00:07:43.665 --rc genhtml_legend=1 00:07:43.665 --rc geninfo_all_blocks=1 00:07:43.665 --rc geninfo_unexecuted_blocks=1 00:07:43.665 00:07:43.665 ' 00:07:43.665 02:22:02 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:07:43.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.665 --rc genhtml_branch_coverage=1 00:07:43.665 --rc genhtml_function_coverage=1 00:07:43.665 --rc genhtml_legend=1 00:07:43.665 --rc geninfo_all_blocks=1 00:07:43.665 --rc geninfo_unexecuted_blocks=1 00:07:43.665 00:07:43.665 ' 00:07:43.665 02:22:02 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:43.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.665 --rc genhtml_branch_coverage=1 00:07:43.665 --rc genhtml_function_coverage=1 00:07:43.665 --rc genhtml_legend=1 00:07:43.665 --rc geninfo_all_blocks=1 00:07:43.665 --rc geninfo_unexecuted_blocks=1 00:07:43.665 00:07:43.665 ' 00:07:43.665 02:22:02 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:43.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.665 --rc genhtml_branch_coverage=1 00:07:43.665 --rc genhtml_function_coverage=1 00:07:43.665 --rc genhtml_legend=1 00:07:43.665 --rc geninfo_all_blocks=1 00:07:43.665 --rc geninfo_unexecuted_blocks=1 00:07:43.665 00:07:43.665 ' 00:07:43.665 02:22:02 version -- app/version.sh@17 -- # get_header_version major 00:07:43.665 02:22:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:43.665 02:22:02 version -- app/version.sh@14 -- # cut -f2 00:07:43.665 02:22:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.665 02:22:02 version -- app/version.sh@17 -- # major=24 00:07:43.665 02:22:02 version -- app/version.sh@18 -- # get_header_version minor 00:07:43.665 02:22:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:43.665 02:22:02 version -- app/version.sh@14 -- # cut -f2 00:07:43.665 02:22:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.665 02:22:02 version -- app/version.sh@18 -- # minor=9 00:07:43.665 02:22:02 
version -- app/version.sh@19 -- # get_header_version patch 00:07:43.665 02:22:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:43.665 02:22:02 version -- app/version.sh@14 -- # cut -f2 00:07:43.665 02:22:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.665 02:22:02 version -- app/version.sh@19 -- # patch=1 00:07:43.665 02:22:02 version -- app/version.sh@20 -- # get_header_version suffix 00:07:43.665 02:22:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:43.665 02:22:02 version -- app/version.sh@14 -- # cut -f2 00:07:43.665 02:22:02 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.665 02:22:02 version -- app/version.sh@20 -- # suffix=-pre 00:07:43.665 02:22:02 version -- app/version.sh@22 -- # version=24.9 00:07:43.665 02:22:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:43.665 02:22:02 version -- app/version.sh@25 -- # version=24.9.1 00:07:43.665 02:22:02 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:43.665 02:22:02 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:43.665 02:22:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:43.924 02:22:02 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:43.924 02:22:02 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:43.924 ************************************ 00:07:43.924 END TEST version 00:07:43.924 ************************************ 00:07:43.924 00:07:43.924 real 0m0.311s 00:07:43.924 user 0m0.183s 00:07:43.924 sys 0m0.184s 00:07:43.924 02:22:02 version -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:07:43.924 02:22:02 version -- common/autotest_common.sh@10 -- # set +x 00:07:43.924 02:22:02 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:43.924 02:22:02 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:43.924 02:22:02 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:43.924 02:22:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.924 02:22:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.924 02:22:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.924 ************************************ 00:07:43.924 START TEST bdev_raid 00:07:43.924 ************************************ 00:07:43.924 02:22:02 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:43.924 * Looking for test storage... 00:07:43.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:43.924 02:22:02 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:43.924 02:22:02 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:07:43.924 02:22:02 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:44.219 02:22:02 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.219 02:22:02 bdev_raid -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.219 02:22:02 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:44.219 02:22:02 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.219 02:22:02 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:44.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.219 --rc genhtml_branch_coverage=1 00:07:44.219 --rc genhtml_function_coverage=1 00:07:44.219 --rc genhtml_legend=1 00:07:44.219 --rc geninfo_all_blocks=1 00:07:44.219 --rc geninfo_unexecuted_blocks=1 00:07:44.219 00:07:44.219 ' 00:07:44.219 02:22:02 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:44.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.219 --rc genhtml_branch_coverage=1 00:07:44.219 --rc genhtml_function_coverage=1 00:07:44.219 --rc genhtml_legend=1 00:07:44.219 --rc geninfo_all_blocks=1 00:07:44.219 --rc geninfo_unexecuted_blocks=1 00:07:44.219 00:07:44.219 ' 00:07:44.219 02:22:02 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:44.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.219 --rc genhtml_branch_coverage=1 00:07:44.219 --rc genhtml_function_coverage=1 00:07:44.219 --rc genhtml_legend=1 00:07:44.219 --rc geninfo_all_blocks=1 00:07:44.219 --rc geninfo_unexecuted_blocks=1 00:07:44.219 00:07:44.219 ' 00:07:44.219 02:22:02 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:44.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.219 --rc genhtml_branch_coverage=1 00:07:44.219 --rc genhtml_function_coverage=1 00:07:44.219 --rc genhtml_legend=1 00:07:44.219 --rc geninfo_all_blocks=1 00:07:44.219 --rc geninfo_unexecuted_blocks=1 00:07:44.219 00:07:44.219 ' 00:07:44.219 02:22:02 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:44.219 02:22:02 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:44.219 02:22:02 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:44.219 02:22:02 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:44.219 02:22:02 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:44.219 02:22:02 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:44.219 02:22:02 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:44.219 02:22:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.219 02:22:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.219 02:22:02 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.219 ************************************ 00:07:44.219 START TEST raid1_resize_data_offset_test 00:07:44.219 ************************************ 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71458 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71458' 00:07:44.219 Process raid pid: 71458 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71458 00:07:44.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71458 ']' 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.219 02:22:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.219 [2024-10-13 02:22:02.759205] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:44.219 [2024-10-13 02:22:02.759330] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.516 [2024-10-13 02:22:02.906374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.516 [2024-10-13 02:22:02.951454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.516 [2024-10-13 02:22:02.993587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.516 [2024-10-13 02:22:02.993700] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.086 malloc0 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.086 malloc1 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.086 02:22:03 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.086 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.086 null0 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.087 [2024-10-13 02:22:03.669136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:45.087 [2024-10-13 02:22:03.671011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:45.087 [2024-10-13 02:22:03.671065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:45.087 [2024-10-13 02:22:03.671195] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:45.087 [2024-10-13 02:22:03.671219] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:45.087 [2024-10-13 02:22:03.671473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:45.087 [2024-10-13 02:22:03.671616] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:45.087 [2024-10-13 02:22:03.671630] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:45.087 [2024-10-13 02:22:03.671763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.087 [2024-10-13 02:22:03.729036] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.087 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.347 malloc2 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.347 [2024-10-13 02:22:03.856553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:45.347 [2024-10-13 02:22:03.860941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.347 [2024-10-13 02:22:03.862837] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71458 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71458 ']' 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71458 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71458 00:07:45.347 killing process with pid 71458 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71458' 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71458 00:07:45.347 [2024-10-13 02:22:03.943435] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.347 02:22:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71458 00:07:45.347 [2024-10-13 02:22:03.943610] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:45.347 [2024-10-13 02:22:03.943668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.347 [2024-10-13 02:22:03.943686] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:45.347 [2024-10-13 02:22:03.949431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.347 [2024-10-13 02:22:03.949788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.347 [2024-10-13 02:22:03.949810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:45.607 [2024-10-13 02:22:04.161762] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.867 02:22:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:45.867 00:07:45.867 real 0m1.727s 00:07:45.867 user 0m1.725s 00:07:45.867 sys 0m0.452s 00:07:45.867 02:22:04 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.867 ************************************ 00:07:45.867 END TEST raid1_resize_data_offset_test 00:07:45.867 ************************************ 00:07:45.867 02:22:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.867 02:22:04 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:45.867 02:22:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.867 02:22:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.867 02:22:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.867 ************************************ 00:07:45.867 START TEST raid0_resize_superblock_test 00:07:45.867 ************************************ 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71514 00:07:45.867 Process raid pid: 71514 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71514' 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71514 00:07:45.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71514 ']' 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.867 02:22:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.126 [2024-10-13 02:22:04.562042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:46.126 [2024-10-13 02:22:04.562274] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.126 [2024-10-13 02:22:04.709726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.126 [2024-10-13 02:22:04.755564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.126 [2024-10-13 02:22:04.797786] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.126 [2024-10-13 02:22:04.797919] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.066 malloc0 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.066 [2024-10-13 02:22:05.501287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:47.066 [2024-10-13 02:22:05.501350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.066 [2024-10-13 02:22:05.501371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:47.066 [2024-10-13 02:22:05.501381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.066 [2024-10-13 02:22:05.503526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.066 [2024-10-13 02:22:05.503571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:47.066 pt0 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.066 bf65d627-0a86-463f-b5fa-147ec817f256 00:07:47.066 02:22:05 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.066 a9021668-b67f-4911-a04a-76ffa01ca300 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.066 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.067 ab38add4-02e4-4854-8e0d-de35b93c7346 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.067 [2024-10-13 02:22:05.638547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev a9021668-b67f-4911-a04a-76ffa01ca300 is claimed 00:07:47.067 [2024-10-13 02:22:05.638621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev ab38add4-02e4-4854-8e0d-de35b93c7346 is claimed 00:07:47.067 [2024-10-13 02:22:05.638736] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:47.067 [2024-10-13 02:22:05.638749] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:47.067 [2024-10-13 02:22:05.639055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:47.067 [2024-10-13 02:22:05.639199] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:47.067 [2024-10-13 02:22:05.639226] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:47.067 [2024-10-13 02:22:05.639361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.067 02:22:05 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.067 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.328 [2024-10-13 02:22:05.754558] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.328 [2024-10-13 02:22:05.794408] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:47.328 [2024-10-13 02:22:05.794479] bdev_raid.c:2326:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'a9021668-b67f-4911-a04a-76ffa01ca300' was resized: old size 131072, new size 204800 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.328 [2024-10-13 02:22:05.806321] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:47.328 [2024-10-13 02:22:05.806342] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ab38add4-02e4-4854-8e0d-de35b93c7346' was resized: old size 131072, new size 204800 00:07:47.328 [2024-10-13 02:22:05.806368] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:47.328 02:22:05 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.328 [2024-10-13 02:22:05.918227] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:47.328 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.328 [2024-10-13 02:22:05.961993] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:47.328 [2024-10-13 02:22:05.962102] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:47.328 [2024-10-13 02:22:05.962122] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.328 [2024-10-13 02:22:05.962140] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:47.328 [2024-10-13 02:22:05.962256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.329 [2024-10-13 02:22:05.962297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.329 [2024-10-13 02:22:05.962308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.329 [2024-10-13 02:22:05.973921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:47.329 [2024-10-13 02:22:05.973965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.329 [2024-10-13 02:22:05.973982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:47.329 [2024-10-13 02:22:05.973993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.329 
[2024-10-13 02:22:05.976044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.329 [2024-10-13 02:22:05.976081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:47.329 [2024-10-13 02:22:05.977393] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a9021668-b67f-4911-a04a-76ffa01ca300 00:07:47.329 [2024-10-13 02:22:05.977495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev a9021668-b67f-4911-a04a-76ffa01ca300 is claimed 00:07:47.329 [2024-10-13 02:22:05.977576] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ab38add4-02e4-4854-8e0d-de35b93c7346 00:07:47.329 [2024-10-13 02:22:05.977599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev ab38add4-02e4-4854-8e0d-de35b93c7346 is claimed 00:07:47.329 [2024-10-13 02:22:05.977674] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ab38add4-02e4-4854-8e0d-de35b93c7346 (2) smaller than existing raid bdev Raid (3) 00:07:47.329 [2024-10-13 02:22:05.977692] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a9021668-b67f-4911-a04a-76ffa01ca300: File exists 00:07:47.329 [2024-10-13 02:22:05.977729] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:07:47.329 [2024-10-13 02:22:05.977738] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:47.329 [2024-10-13 02:22:05.977968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:07:47.329 [2024-10-13 02:22:05.978108] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:07:47.329 [2024-10-13 02:22:05.978117] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:07:47.329 [2024-10-13 02:22:05.978255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:47.329 pt0 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.329 02:22:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.329 [2024-10-13 02:22:06.002391] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.589 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:47.589 02:22:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71514 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@950 -- # '[' -z 71514 ']' 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71514 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71514 00:07:47.589 killing process with pid 71514 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71514' 00:07:47.589 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71514 00:07:47.589 [2024-10-13 02:22:06.083883] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.589 [2024-10-13 02:22:06.083955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.589 [2024-10-13 02:22:06.083997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.589 [2024-10-13 02:22:06.084005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:07:47.590 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71514 00:07:47.590 [2024-10-13 02:22:06.243935] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.849 ************************************ 00:07:47.849 END TEST raid0_resize_superblock_test 00:07:47.849 ************************************ 00:07:47.849 02:22:06 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:47.849 00:07:47.849 real 0m2.018s 00:07:47.849 user 0m2.288s 00:07:47.849 sys 0m0.518s 00:07:47.849 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.849 02:22:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.109 02:22:06 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:48.109 02:22:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:48.109 02:22:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.109 02:22:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.109 ************************************ 00:07:48.109 START TEST raid1_resize_superblock_test 00:07:48.109 ************************************ 00:07:48.109 Process raid pid: 71585 00:07:48.109 02:22:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:48.109 02:22:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:48.109 02:22:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71585 00:07:48.109 02:22:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.110 02:22:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71585' 00:07:48.110 02:22:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71585 00:07:48.110 02:22:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71585 ']' 00:07:48.110 02:22:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.110 02:22:06 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.110 02:22:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.110 02:22:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.110 02:22:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.110 [2024-10-13 02:22:06.647018] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:48.110 [2024-10-13 02:22:06.647265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.110 [2024-10-13 02:22:06.776432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.370 [2024-10-13 02:22:06.821536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.370 [2024-10-13 02:22:06.864716] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.370 [2024-10-13 02:22:06.864829] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.939 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.939 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:48.939 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:48.939 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.939 02:22:07 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:48.939 malloc0 00:07:48.939 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.939 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:48.939 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.939 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.939 [2024-10-13 02:22:07.593136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:48.939 [2024-10-13 02:22:07.593245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.939 [2024-10-13 02:22:07.593284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:48.939 [2024-10-13 02:22:07.593314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.939 [2024-10-13 02:22:07.595408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.939 [2024-10-13 02:22:07.595487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:48.939 pt0 00:07:48.940 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.940 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:48.940 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.940 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.199 d69398aa-9132-4c78-851d-a8e132022516 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd 
bdev_lvol_create -l lvs0 lvol0 64 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.199 3bc98610-2ab6-46c4-8132-b3e33a80d84a 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.199 e807b030-6f13-4e9e-899f-8c8f7792a407 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.199 [2024-10-13 02:22:07.729456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3bc98610-2ab6-46c4-8132-b3e33a80d84a is claimed 00:07:49.199 [2024-10-13 02:22:07.729530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e807b030-6f13-4e9e-899f-8c8f7792a407 is claimed 00:07:49.199 [2024-10-13 02:22:07.729641] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:49.199 [2024-10-13 02:22:07.729656] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:49.199 
[2024-10-13 02:22:07.729913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:49.199 [2024-10-13 02:22:07.730065] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:49.199 [2024-10-13 02:22:07.730076] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:49.199 [2024-10-13 02:22:07.730206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.199 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:49.200 02:22:07 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.200 [2024-10-13 02:22:07.841465] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.200 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:49.467 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:49.467 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:49.467 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:49.467 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.467 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.467 [2024-10-13 02:22:07.889325] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:49.467 [2024-10-13 02:22:07.889392] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3bc98610-2ab6-46c4-8132-b3e33a80d84a' was resized: old size 131072, new size 204800 00:07:49.467 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:49.467 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:49.467 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.468 [2024-10-13 02:22:07.901250] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:49.468 [2024-10-13 02:22:07.901310] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e807b030-6f13-4e9e-899f-8c8f7792a407' was resized: old size 131072, new size 204800 00:07:49.468 [2024-10-13 02:22:07.901367] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.468 02:22:07 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.468 02:22:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.468 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:49.468 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:49.468 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:49.468 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:49.468 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:49.468 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.468 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.468 [2024-10-13 02:22:08.013141] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.468 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.469 [2024-10-13 02:22:08.056901] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:07:49.469 [2024-10-13 02:22:08.056962] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:49.469 [2024-10-13 02:22:08.056989] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:49.469 [2024-10-13 02:22:08.057129] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.469 [2024-10-13 02:22:08.057263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.469 [2024-10-13 02:22:08.057312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.469 [2024-10-13 02:22:08.057323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.469 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.469 [2024-10-13 02:22:08.068828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:49.469 [2024-10-13 02:22:08.068927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.469 [2024-10-13 02:22:08.068948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:49.470 [2024-10-13 02:22:08.068959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.470 [2024-10-13 02:22:08.071003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.470 [2024-10-13 02:22:08.071040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:49.470 [2024-10-13 02:22:08.072366] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3bc98610-2ab6-46c4-8132-b3e33a80d84a 00:07:49.470 [2024-10-13 02:22:08.072469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3bc98610-2ab6-46c4-8132-b3e33a80d84a is claimed 00:07:49.470 [2024-10-13 02:22:08.072548] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e807b030-6f13-4e9e-899f-8c8f7792a407 00:07:49.470 [2024-10-13 02:22:08.072569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e807b030-6f13-4e9e-899f-8c8f7792a407 is claimed 00:07:49.470 [2024-10-13 02:22:08.072660] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e807b030-6f13-4e9e-899f-8c8f7792a407 (2) smaller than existing raid bdev Raid (3) 00:07:49.470 [2024-10-13 02:22:08.072681] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 3bc98610-2ab6-46c4-8132-b3e33a80d84a: File exists 00:07:49.470 [2024-10-13 02:22:08.072730] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:07:49.470 [2024-10-13 02:22:08.072739] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:49.470 [2024-10-13 02:22:08.072971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:07:49.470 [2024-10-13 02:22:08.073111] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:07:49.470 [2024-10-13 02:22:08.073121] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:07:49.470 [2024-10-13 02:22:08.073258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.470 pt0 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.470 [2024-10-13 02:22:08.097406] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71585 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71585 ']' 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71585 00:07:49.470 02:22:08 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:07:49.471 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.733 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71585 00:07:49.733 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.733 killing process with pid 71585 00:07:49.733 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.733 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71585' 00:07:49.733 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71585 00:07:49.733 [2024-10-13 02:22:08.174653] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.733 [2024-10-13 02:22:08.174714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.733 [2024-10-13 02:22:08.174754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.734 [2024-10-13 02:22:08.174762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:07:49.734 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71585 00:07:49.734 [2024-10-13 02:22:08.334483] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.994 02:22:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:49.994 00:07:49.994 real 0m2.018s 00:07:49.994 user 0m2.316s 00:07:49.994 sys 0m0.502s 00:07:49.994 ************************************ 00:07:49.994 END TEST raid1_resize_superblock_test 00:07:49.994 ************************************ 00:07:49.994 02:22:08 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.994 02:22:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.994 02:22:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:49.994 02:22:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:49.994 02:22:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:49.994 02:22:08 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:49.994 02:22:08 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:49.994 02:22:08 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:49.994 02:22:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:49.994 02:22:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.994 02:22:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.994 ************************************ 00:07:49.994 START TEST raid_function_test_raid0 00:07:49.994 ************************************ 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:49.994 Process raid pid: 71660 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71660 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71660' 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71660 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 71660 ']' 00:07:49.994 02:22:08 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:49.994 02:22:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:50.254 [2024-10-13 02:22:08.759505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
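The `bdev_svc` app above is launched with `-L bdev_raid`, and the raid0 under test is assembled from two 32 MiB malloc bdevs (`bdev_malloc_create 32 512`). The `blockcnt 131072, blocklen 512` reported later by `raid_bdev_configure_cont` follows directly from that geometry; a minimal sketch of the arithmetic (sizes taken from the RPC calls visible in this log):

```shell
# Each base bdev: 32 MiB of 512-byte blocks (from `bdev_malloc_create 32 512`).
base_mib=32
blocklen=512
num_bases=2
blocks_per_base=$(( base_mib * 1024 * 1024 / blocklen ))   # 65536
# raid0 stripes across both bases, so the capacities add:
raid_blockcnt=$(( blocks_per_base * num_bases ))
echo "$raid_blockcnt"    # 131072, matching the DEBUG line below
```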
00:07:50.254 [2024-10-13 02:22:08.759743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.254 [2024-10-13 02:22:08.908998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.514 [2024-10-13 02:22:08.956911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.514 [2024-10-13 02:22:08.999703] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.514 [2024-10-13 02:22:08.999819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:51.083 Base_1 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:51.083 Base_2 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:51.083 [2024-10-13 02:22:09.641435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:51.083 [2024-10-13 02:22:09.644674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:51.083 [2024-10-13 02:22:09.644773] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:51.083 [2024-10-13 02:22:09.644792] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:51.083 [2024-10-13 02:22:09.645233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:51.083 [2024-10-13 02:22:09.645407] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:51.083 [2024-10-13 02:22:09.645423] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:07:51.083 [2024-10-13 02:22:09.645681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:51.083 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:51.343 [2024-10-13 02:22:09.881267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:51.343 /dev/nbd0 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:51.343 
02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.343 1+0 records in 00:07:51.343 1+0 records out 00:07:51.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029401 s, 13.9 MB/s 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:51.343 02:22:09 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:51.603 { 00:07:51.603 "nbd_device": "/dev/nbd0", 00:07:51.603 "bdev_name": "raid" 00:07:51.603 } 00:07:51.603 ]' 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:51.603 { 00:07:51.603 "nbd_device": "/dev/nbd0", 00:07:51.603 "bdev_name": "raid" 00:07:51.603 } 00:07:51.603 ]' 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:51.603 4096+0 records in 00:07:51.603 4096+0 records out 00:07:51.603 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0362971 s, 57.8 MB/s 00:07:51.603 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:51.862 4096+0 records in 00:07:51.862 4096+0 records out 00:07:51.862 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.212778 s, 9.9 MB/s 00:07:51.862 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:51.862 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:51.862 02:22:10 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:51.862 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:51.862 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:51.862 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:51.862 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:51.862 128+0 records in 00:07:51.862 128+0 records out 00:07:51.862 65536 bytes (66 kB, 64 KiB) copied, 0.00123521 s, 53.1 MB/s 00:07:51.863 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:51.863 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:51.863 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:51.863 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:51.863 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:51.863 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:51.863 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:51.863 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:51.863 2035+0 records in 00:07:51.863 2035+0 records out 00:07:51.863 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.01346 s, 77.4 MB/s 00:07:51.863 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:52.122 02:22:10 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:52.122 456+0 records in 00:07:52.122 456+0 records out 00:07:52.122 233472 bytes (233 kB, 228 KiB) copied, 0.00395513 s, 59.0 MB/s 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.122 02:22:10 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.122 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.382 [2024-10-13 02:22:10.812077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:52.382 02:22:10 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:52.382 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:52.382 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:52.382 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71660 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71660 ']' 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 71660 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71660 00:07:52.642 killing process with pid 71660 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71660' 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71660 
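The `raid_unmap_data_verify` loop that just ran keeps a reference file in sync with the device: it writes 2 MiB of random data through `/dev/nbd0`, then for each (offset, length) pair zeroes the same range in the reference file with `dd conv=notrunc` while issuing `blkdiscard` on the device, and finally `cmp`s the two. A self-contained sketch of that idea using two plain files in place of an nbd device (the temp paths are stand-ins, not the test's real paths):

```shell
set -e
tmp=$(mktemp -d)
# Reference file and "device" start out identical (4096 blocks of 512 B).
dd if=/dev/urandom of="$tmp/ref" bs=512 count=4096 status=none
cp "$tmp/ref" "$tmp/dev"
# "Discard" blocks 1028..3062 on the device stand-in (zero them, as a
# discarded-reads-zero device would) and zero the same range in the reference.
dd if=/dev/zero of="$tmp/dev" bs=512 seek=1028 count=2035 conv=notrunc status=none
dd if=/dev/zero of="$tmp/ref" bs=512 seek=1028 count=2035 conv=notrunc status=none
# Byte-for-byte comparison over the whole 2 MiB region, as the test does.
cmp -b -n 2097152 "$tmp/ref" "$tmp/dev" && echo OK
rm -rf "$tmp"
```

The real test additionally runs `blockdev --flushbufs` between the discard and the compare so stale page-cache data cannot mask a failed unmap.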
00:07:52.642 [2024-10-13 02:22:11.147492] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.642 [2024-10-13 02:22:11.147616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.642 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71660 00:07:52.642 [2024-10-13 02:22:11.147675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.642 [2024-10-13 02:22:11.147693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:07:52.642 [2024-10-13 02:22:11.170150] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.900 02:22:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:52.900 00:07:52.900 real 0m2.742s 00:07:52.900 user 0m3.299s 00:07:52.900 sys 0m1.012s 00:07:52.900 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.900 ************************************ 00:07:52.900 END TEST raid_function_test_raid0 00:07:52.900 ************************************ 00:07:52.900 02:22:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:52.900 02:22:11 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:52.900 02:22:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:52.900 02:22:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.900 02:22:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.900 ************************************ 00:07:52.900 START TEST raid_function_test_concat 00:07:52.900 ************************************ 00:07:52.900 02:22:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:52.900 02:22:11 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:52.900 02:22:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:52.900 02:22:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:52.900 02:22:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71775 00:07:52.900 Process raid pid: 71775 00:07:52.900 02:22:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:52.900 02:22:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71775' 00:07:52.900 02:22:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71775 00:07:52.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.901 02:22:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71775 ']' 00:07:52.901 02:22:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.901 02:22:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.901 02:22:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.901 02:22:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.901 02:22:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:52.901 [2024-10-13 02:22:11.570669] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
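The concat test starting here reuses the same `raid_unmap_data_verify` helper, so it will issue the same three discard ranges seen in the raid0 run above. The byte values in those `blkdiscard -o … -l …` calls are just the block numbers scaled by the 512-byte logical sector size; a quick sketch of the conversion (block lists copied from the script's `unmap_blk_offs`/`unmap_blk_nums` arrays):

```shell
blksize=512
# Each pair is "<offset in blocks> <length in blocks>" from the test script.
for pair in "0 128" "1028 2035" "321 456"; do
  set -- $pair
  echo "$(( $1 * blksize )) $(( $2 * blksize ))"
done
# Yields 0/65536, 526336/1041920, 164352/233472 — the unmap_off/unmap_len
# values logged in the raid0 run.
```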
00:07:52.901 [2024-10-13 02:22:11.570937] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.160 [2024-10-13 02:22:11.716009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.160 [2024-10-13 02:22:11.762239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.160 [2024-10-13 02:22:11.805237] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.160 [2024-10-13 02:22:11.805274] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:53.758 Base_1 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:53.758 Base_2 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:53.758 [2024-10-13 02:22:12.425703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:53.758 [2024-10-13 02:22:12.427595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:53.758 [2024-10-13 02:22:12.427705] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:53.758 [2024-10-13 02:22:12.427747] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:53.758 [2024-10-13 02:22:12.428042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:53.758 [2024-10-13 02:22:12.428240] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:53.758 [2024-10-13 02:22:12.428285] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:07:53.758 [2024-10-13 02:22:12.428463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.758 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.017 02:22:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.018 02:22:12 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:54.018 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:54.018 [2024-10-13 02:22:12.673321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:54.018 /dev/nbd0 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.277 1+0 records in 00:07:54.277 1+0 records out 00:07:54.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567201 s, 7.2 MB/s 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:07:54.277 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:54.536 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:54.536 { 00:07:54.536 "nbd_device": "/dev/nbd0", 00:07:54.536 "bdev_name": "raid" 00:07:54.536 } 00:07:54.536 ]' 00:07:54.536 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:54.536 { 00:07:54.536 "nbd_device": "/dev/nbd0", 00:07:54.536 "bdev_name": "raid" 00:07:54.536 } 00:07:54.536 ]' 00:07:54.536 02:22:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:54.536 02:22:13 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:54.536 4096+0 records in 00:07:54.536 4096+0 records out 00:07:54.536 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0343596 s, 61.0 MB/s 00:07:54.536 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:54.796 4096+0 records in 00:07:54.796 4096+0 records out 00:07:54.796 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.182071 s, 11.5 MB/s 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:54.796 128+0 records in 00:07:54.796 128+0 records out 00:07:54.796 65536 bytes (66 kB, 64 KiB) copied, 0.00117311 s, 55.9 MB/s 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:54.796 2035+0 records in 00:07:54.796 2035+0 records out 00:07:54.796 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0144632 s, 72.0 MB/s 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:54.796 456+0 records in 00:07:54.796 456+0 records out 00:07:54.796 233472 bytes (233 kB, 228 KiB) copied, 0.00361977 s, 64.5 MB/s 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:54.796 02:22:13 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.796 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:55.056 [2024-10-13 02:22:13.586216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:55.056 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71775 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71775 ']' 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 71775 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71775 00:07:55.316 killing process with pid 71775 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 71775' 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71775 00:07:55.316 [2024-10-13 02:22:13.908417] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.316 [2024-10-13 02:22:13.908541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.316 02:22:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71775 00:07:55.316 [2024-10-13 02:22:13.908599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.316 [2024-10-13 02:22:13.908610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:07:55.316 [2024-10-13 02:22:13.931401] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.576 ************************************ 00:07:55.576 END TEST raid_function_test_concat 00:07:55.576 ************************************ 00:07:55.576 02:22:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:55.576 00:07:55.576 real 0m2.677s 00:07:55.576 user 0m3.358s 00:07:55.576 sys 0m0.879s 00:07:55.576 02:22:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.576 02:22:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:55.576 02:22:14 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:55.576 02:22:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:55.576 02:22:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.576 02:22:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.576 ************************************ 00:07:55.576 START TEST raid0_resize_test 00:07:55.576 ************************************ 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71892 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71892' 00:07:55.576 Process raid pid: 71892 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71892 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 71892 ']' 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.576 02:22:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.835 [2024-10-13 02:22:14.318467] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:55.835 [2024-10-13 02:22:14.318666] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.835 [2024-10-13 02:22:14.463477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.835 [2024-10-13 02:22:14.510114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.094 [2024-10-13 02:22:14.552010] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.094 [2024-10-13 02:22:14.552046] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.663 Base_1 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:56.663 Base_2 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.663 [2024-10-13 02:22:15.177390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:56.663 [2024-10-13 02:22:15.179176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:56.663 [2024-10-13 02:22:15.179231] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:56.663 [2024-10-13 02:22:15.179241] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:56.663 [2024-10-13 02:22:15.179498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:56.663 [2024-10-13 02:22:15.179588] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:56.663 [2024-10-13 02:22:15.179601] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:56.663 [2024-10-13 02:22:15.179709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:07:56.663 [2024-10-13 02:22:15.189345] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:56.663 [2024-10-13 02:22:15.189405] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:56.663 true 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:56.663 [2024-10-13 02:22:15.201487] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.663 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.663 [2024-10-13 02:22:15.253226] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:56.664 [2024-10-13 02:22:15.253284] 
bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:56.664 [2024-10-13 02:22:15.253326] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:56.664 true 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:56.664 [2024-10-13 02:22:15.265358] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71892 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71892 ']' 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 71892 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:07:56.664 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71892 00:07:56.923 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.923 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.923 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71892' 00:07:56.923 killing process with pid 71892 00:07:56.923 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 71892 00:07:56.923 [2024-10-13 02:22:15.355570] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.923 [2024-10-13 02:22:15.355692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.923 [2024-10-13 02:22:15.355770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.923 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 71892 00:07:56.923 [2024-10-13 02:22:15.355831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:56.923 [2024-10-13 02:22:15.357322] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.923 02:22:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:56.923 00:07:56.923 real 0m1.353s 00:07:56.923 user 0m1.520s 00:07:56.923 sys 0m0.293s 00:07:56.923 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.923 02:22:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.923 ************************************ 00:07:56.923 END TEST raid0_resize_test 00:07:56.923 ************************************ 00:07:57.183 02:22:15 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid1_resize_test raid_resize_test 1 
00:07:57.183 02:22:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:57.183 02:22:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.183 02:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.183 ************************************ 00:07:57.183 START TEST raid1_resize_test 00:07:57.183 ************************************ 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71937 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71937' 00:07:57.183 Process raid pid: 71937 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71937 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 71937 ']' 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.183 02:22:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.183 [2024-10-13 02:22:15.753801] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:57.183 [2024-10-13 02:22:15.754077] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.442 [2024-10-13 02:22:15.900478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.442 [2024-10-13 02:22:15.944881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.442 [2024-10-13 02:22:15.986966] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.442 [2024-10-13 02:22:15.987078] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.013 
Base_1 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.013 Base_2 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.013 [2024-10-13 02:22:16.588546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:58.013 [2024-10-13 02:22:16.590268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:58.013 [2024-10-13 02:22:16.590325] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:58.013 [2024-10-13 02:22:16.590336] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:58.013 [2024-10-13 02:22:16.590588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:58.013 [2024-10-13 02:22:16.590692] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:58.013 [2024-10-13 02:22:16.590701] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:58.013 [2024-10-13 02:22:16.590806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.013 [2024-10-13 02:22:16.596510] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:58.013 [2024-10-13 02:22:16.596547] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:58.013 true 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.013 [2024-10-13 02:22:16.612663] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.013 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.014 [2024-10-13 02:22:16.672387] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:58.014 [2024-10-13 02:22:16.672468] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:58.014 [2024-10-13 02:22:16.672502] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:58.014 true 00:07:58.014 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.014 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:58.014 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.014 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.014 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:58.014 [2024-10-13 02:22:16.684522] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 71937 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71937 ']' 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 71937 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71937 00:07:58.272 killing process with pid 71937 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71937' 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 71937 00:07:58.272 [2024-10-13 02:22:16.771690] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.272 [2024-10-13 02:22:16.771779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.272 02:22:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 71937 00:07:58.272 [2024-10-13 02:22:16.772199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.272 [2024-10-13 02:22:16.772222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:58.272 [2024-10-13 02:22:16.773344] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.532 ************************************ 00:07:58.532 END TEST raid1_resize_test 00:07:58.532 ************************************ 00:07:58.532 02:22:17 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:07:58.532 00:07:58.532 real 0m1.353s 00:07:58.532 user 0m1.514s 00:07:58.532 sys 0m0.314s 00:07:58.532 02:22:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.532 02:22:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.532 02:22:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:58.532 02:22:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:58.532 02:22:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:58.532 02:22:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:58.532 02:22:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.532 02:22:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.532 ************************************ 00:07:58.532 START TEST raid_state_function_test 00:07:58.532 ************************************ 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:58.532 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71988 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71988' 00:07:58.533 Process raid pid: 71988 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71988 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71988 ']' 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.533 02:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.533 [2024-10-13 02:22:17.187624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:58.533 [2024-10-13 02:22:17.187854] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.792 [2024-10-13 02:22:17.335008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.792 [2024-10-13 02:22:17.381119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.792 [2024-10-13 02:22:17.423836] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.792 [2024-10-13 02:22:17.423971] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.361 [2024-10-13 02:22:18.017528] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.361 [2024-10-13 02:22:18.017651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.361 [2024-10-13 02:22:18.017694] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.361 [2024-10-13 02:22:18.017717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.361 02:22:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.361 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.620 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.620 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.620 "name": "Existed_Raid", 00:07:59.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.620 "strip_size_kb": 64, 00:07:59.620 "state": "configuring", 00:07:59.620 
"raid_level": "raid0", 00:07:59.620 "superblock": false, 00:07:59.620 "num_base_bdevs": 2, 00:07:59.620 "num_base_bdevs_discovered": 0, 00:07:59.620 "num_base_bdevs_operational": 2, 00:07:59.620 "base_bdevs_list": [ 00:07:59.620 { 00:07:59.620 "name": "BaseBdev1", 00:07:59.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.620 "is_configured": false, 00:07:59.620 "data_offset": 0, 00:07:59.620 "data_size": 0 00:07:59.620 }, 00:07:59.620 { 00:07:59.620 "name": "BaseBdev2", 00:07:59.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.620 "is_configured": false, 00:07:59.620 "data_offset": 0, 00:07:59.620 "data_size": 0 00:07:59.620 } 00:07:59.620 ] 00:07:59.620 }' 00:07:59.620 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.620 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.879 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:59.879 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.879 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.879 [2024-10-13 02:22:18.448710] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:59.880 [2024-10-13 02:22:18.448825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:59.880 [2024-10-13 02:22:18.460695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.880 [2024-10-13 02:22:18.460781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.880 [2024-10-13 02:22:18.460802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.880 [2024-10-13 02:22:18.460813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.880 [2024-10-13 02:22:18.481579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.880 BaseBdev1 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.880 [ 00:07:59.880 { 00:07:59.880 "name": "BaseBdev1", 00:07:59.880 "aliases": [ 00:07:59.880 "52a7865a-94cb-4b8a-9289-9dbed97edcde" 00:07:59.880 ], 00:07:59.880 "product_name": "Malloc disk", 00:07:59.880 "block_size": 512, 00:07:59.880 "num_blocks": 65536, 00:07:59.880 "uuid": "52a7865a-94cb-4b8a-9289-9dbed97edcde", 00:07:59.880 "assigned_rate_limits": { 00:07:59.880 "rw_ios_per_sec": 0, 00:07:59.880 "rw_mbytes_per_sec": 0, 00:07:59.880 "r_mbytes_per_sec": 0, 00:07:59.880 "w_mbytes_per_sec": 0 00:07:59.880 }, 00:07:59.880 "claimed": true, 00:07:59.880 "claim_type": "exclusive_write", 00:07:59.880 "zoned": false, 00:07:59.880 "supported_io_types": { 00:07:59.880 "read": true, 00:07:59.880 "write": true, 00:07:59.880 "unmap": true, 00:07:59.880 "flush": true, 00:07:59.880 "reset": true, 00:07:59.880 "nvme_admin": false, 00:07:59.880 "nvme_io": false, 00:07:59.880 "nvme_io_md": false, 00:07:59.880 "write_zeroes": true, 00:07:59.880 "zcopy": true, 00:07:59.880 "get_zone_info": false, 00:07:59.880 "zone_management": false, 00:07:59.880 "zone_append": false, 00:07:59.880 "compare": false, 00:07:59.880 "compare_and_write": false, 00:07:59.880 "abort": true, 00:07:59.880 "seek_hole": false, 00:07:59.880 "seek_data": false, 00:07:59.880 "copy": true, 00:07:59.880 "nvme_iov_md": 
false 00:07:59.880 }, 00:07:59.880 "memory_domains": [ 00:07:59.880 { 00:07:59.880 "dma_device_id": "system", 00:07:59.880 "dma_device_type": 1 00:07:59.880 }, 00:07:59.880 { 00:07:59.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.880 "dma_device_type": 2 00:07:59.880 } 00:07:59.880 ], 00:07:59.880 "driver_specific": {} 00:07:59.880 } 00:07:59.880 ] 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.880 02:22:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.880 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.140 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.140 "name": "Existed_Raid", 00:08:00.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.140 "strip_size_kb": 64, 00:08:00.140 "state": "configuring", 00:08:00.140 "raid_level": "raid0", 00:08:00.140 "superblock": false, 00:08:00.140 "num_base_bdevs": 2, 00:08:00.140 "num_base_bdevs_discovered": 1, 00:08:00.140 "num_base_bdevs_operational": 2, 00:08:00.140 "base_bdevs_list": [ 00:08:00.140 { 00:08:00.140 "name": "BaseBdev1", 00:08:00.140 "uuid": "52a7865a-94cb-4b8a-9289-9dbed97edcde", 00:08:00.140 "is_configured": true, 00:08:00.140 "data_offset": 0, 00:08:00.140 "data_size": 65536 00:08:00.140 }, 00:08:00.140 { 00:08:00.140 "name": "BaseBdev2", 00:08:00.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.140 "is_configured": false, 00:08:00.140 "data_offset": 0, 00:08:00.140 "data_size": 0 00:08:00.140 } 00:08:00.140 ] 00:08:00.140 }' 00:08:00.140 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.140 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.400 [2024-10-13 02:22:18.952841] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.400 [2024-10-13 02:22:18.952990] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.400 [2024-10-13 02:22:18.964870] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.400 [2024-10-13 02:22:18.966856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.400 [2024-10-13 02:22:18.966950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.400 02:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.400 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.400 "name": "Existed_Raid", 00:08:00.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.400 "strip_size_kb": 64, 00:08:00.400 "state": "configuring", 00:08:00.400 "raid_level": "raid0", 00:08:00.400 "superblock": false, 00:08:00.400 "num_base_bdevs": 2, 00:08:00.400 "num_base_bdevs_discovered": 1, 00:08:00.400 "num_base_bdevs_operational": 2, 00:08:00.400 "base_bdevs_list": [ 00:08:00.400 { 00:08:00.400 "name": "BaseBdev1", 00:08:00.400 "uuid": "52a7865a-94cb-4b8a-9289-9dbed97edcde", 00:08:00.400 "is_configured": true, 00:08:00.400 "data_offset": 0, 00:08:00.400 "data_size": 65536 00:08:00.400 }, 00:08:00.400 { 00:08:00.400 "name": "BaseBdev2", 00:08:00.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.400 "is_configured": false, 00:08:00.400 "data_offset": 0, 00:08:00.400 "data_size": 0 
00:08:00.400 } 00:08:00.400 ] 00:08:00.400 }' 00:08:00.400 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.400 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.969 [2024-10-13 02:22:19.442088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.969 [2024-10-13 02:22:19.442143] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:00.969 [2024-10-13 02:22:19.442166] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:00.969 [2024-10-13 02:22:19.442470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:00.969 [2024-10-13 02:22:19.442629] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:00.969 [2024-10-13 02:22:19.442647] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:00.969 [2024-10-13 02:22:19.442913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.969 BaseBdev2 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:00.969 02:22:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.969 [ 00:08:00.969 { 00:08:00.969 "name": "BaseBdev2", 00:08:00.969 "aliases": [ 00:08:00.969 "22ae53d3-ebdc-4662-9c18-7515222ac4c0" 00:08:00.969 ], 00:08:00.969 "product_name": "Malloc disk", 00:08:00.969 "block_size": 512, 00:08:00.969 "num_blocks": 65536, 00:08:00.969 "uuid": "22ae53d3-ebdc-4662-9c18-7515222ac4c0", 00:08:00.969 "assigned_rate_limits": { 00:08:00.969 "rw_ios_per_sec": 0, 00:08:00.969 "rw_mbytes_per_sec": 0, 00:08:00.969 "r_mbytes_per_sec": 0, 00:08:00.969 "w_mbytes_per_sec": 0 00:08:00.969 }, 00:08:00.969 "claimed": true, 00:08:00.969 "claim_type": "exclusive_write", 00:08:00.969 "zoned": false, 00:08:00.969 "supported_io_types": { 00:08:00.969 "read": true, 00:08:00.969 "write": true, 00:08:00.969 "unmap": true, 00:08:00.969 "flush": true, 00:08:00.969 "reset": true, 00:08:00.969 "nvme_admin": false, 00:08:00.969 "nvme_io": false, 00:08:00.969 "nvme_io_md": 
false, 00:08:00.969 "write_zeroes": true, 00:08:00.969 "zcopy": true, 00:08:00.969 "get_zone_info": false, 00:08:00.969 "zone_management": false, 00:08:00.969 "zone_append": false, 00:08:00.969 "compare": false, 00:08:00.969 "compare_and_write": false, 00:08:00.969 "abort": true, 00:08:00.969 "seek_hole": false, 00:08:00.969 "seek_data": false, 00:08:00.969 "copy": true, 00:08:00.969 "nvme_iov_md": false 00:08:00.969 }, 00:08:00.969 "memory_domains": [ 00:08:00.969 { 00:08:00.969 "dma_device_id": "system", 00:08:00.969 "dma_device_type": 1 00:08:00.969 }, 00:08:00.969 { 00:08:00.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.969 "dma_device_type": 2 00:08:00.969 } 00:08:00.969 ], 00:08:00.969 "driver_specific": {} 00:08:00.969 } 00:08:00.969 ] 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.969 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.969 "name": "Existed_Raid", 00:08:00.969 "uuid": "dbc76007-d3d0-4a11-8da3-61dd877489c0", 00:08:00.969 "strip_size_kb": 64, 00:08:00.969 "state": "online", 00:08:00.969 "raid_level": "raid0", 00:08:00.970 "superblock": false, 00:08:00.970 "num_base_bdevs": 2, 00:08:00.970 "num_base_bdevs_discovered": 2, 00:08:00.970 "num_base_bdevs_operational": 2, 00:08:00.970 "base_bdevs_list": [ 00:08:00.970 { 00:08:00.970 "name": "BaseBdev1", 00:08:00.970 "uuid": "52a7865a-94cb-4b8a-9289-9dbed97edcde", 00:08:00.970 "is_configured": true, 00:08:00.970 "data_offset": 0, 00:08:00.970 "data_size": 65536 00:08:00.970 }, 00:08:00.970 { 00:08:00.970 "name": "BaseBdev2", 00:08:00.970 "uuid": "22ae53d3-ebdc-4662-9c18-7515222ac4c0", 00:08:00.970 "is_configured": true, 00:08:00.970 "data_offset": 0, 00:08:00.970 "data_size": 65536 00:08:00.970 } 00:08:00.970 ] 00:08:00.970 }' 00:08:00.970 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:00.970 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.539 [2024-10-13 02:22:19.925588] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.539 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:01.539 "name": "Existed_Raid", 00:08:01.539 "aliases": [ 00:08:01.539 "dbc76007-d3d0-4a11-8da3-61dd877489c0" 00:08:01.539 ], 00:08:01.539 "product_name": "Raid Volume", 00:08:01.539 "block_size": 512, 00:08:01.539 "num_blocks": 131072, 00:08:01.539 "uuid": "dbc76007-d3d0-4a11-8da3-61dd877489c0", 00:08:01.539 "assigned_rate_limits": { 00:08:01.539 "rw_ios_per_sec": 0, 00:08:01.539 "rw_mbytes_per_sec": 0, 00:08:01.539 "r_mbytes_per_sec": 
0, 00:08:01.539 "w_mbytes_per_sec": 0 00:08:01.539 }, 00:08:01.539 "claimed": false, 00:08:01.539 "zoned": false, 00:08:01.539 "supported_io_types": { 00:08:01.539 "read": true, 00:08:01.539 "write": true, 00:08:01.539 "unmap": true, 00:08:01.539 "flush": true, 00:08:01.539 "reset": true, 00:08:01.539 "nvme_admin": false, 00:08:01.539 "nvme_io": false, 00:08:01.539 "nvme_io_md": false, 00:08:01.539 "write_zeroes": true, 00:08:01.539 "zcopy": false, 00:08:01.539 "get_zone_info": false, 00:08:01.539 "zone_management": false, 00:08:01.539 "zone_append": false, 00:08:01.539 "compare": false, 00:08:01.539 "compare_and_write": false, 00:08:01.539 "abort": false, 00:08:01.539 "seek_hole": false, 00:08:01.539 "seek_data": false, 00:08:01.539 "copy": false, 00:08:01.539 "nvme_iov_md": false 00:08:01.539 }, 00:08:01.539 "memory_domains": [ 00:08:01.539 { 00:08:01.539 "dma_device_id": "system", 00:08:01.539 "dma_device_type": 1 00:08:01.539 }, 00:08:01.539 { 00:08:01.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.539 "dma_device_type": 2 00:08:01.539 }, 00:08:01.539 { 00:08:01.539 "dma_device_id": "system", 00:08:01.539 "dma_device_type": 1 00:08:01.539 }, 00:08:01.539 { 00:08:01.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.539 "dma_device_type": 2 00:08:01.539 } 00:08:01.539 ], 00:08:01.539 "driver_specific": { 00:08:01.539 "raid": { 00:08:01.539 "uuid": "dbc76007-d3d0-4a11-8da3-61dd877489c0", 00:08:01.539 "strip_size_kb": 64, 00:08:01.539 "state": "online", 00:08:01.539 "raid_level": "raid0", 00:08:01.539 "superblock": false, 00:08:01.539 "num_base_bdevs": 2, 00:08:01.540 "num_base_bdevs_discovered": 2, 00:08:01.540 "num_base_bdevs_operational": 2, 00:08:01.540 "base_bdevs_list": [ 00:08:01.540 { 00:08:01.540 "name": "BaseBdev1", 00:08:01.540 "uuid": "52a7865a-94cb-4b8a-9289-9dbed97edcde", 00:08:01.540 "is_configured": true, 00:08:01.540 "data_offset": 0, 00:08:01.540 "data_size": 65536 00:08:01.540 }, 00:08:01.540 { 00:08:01.540 "name": "BaseBdev2", 
00:08:01.540 "uuid": "22ae53d3-ebdc-4662-9c18-7515222ac4c0", 00:08:01.540 "is_configured": true, 00:08:01.540 "data_offset": 0, 00:08:01.540 "data_size": 65536 00:08:01.540 } 00:08:01.540 ] 00:08:01.540 } 00:08:01.540 } 00:08:01.540 }' 00:08:01.540 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:01.540 02:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:01.540 BaseBdev2' 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.540 [2024-10-13 02:22:20.129020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.540 [2024-10-13 02:22:20.129052] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.540 [2024-10-13 02:22:20.129103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.540 "name": "Existed_Raid", 00:08:01.540 "uuid": "dbc76007-d3d0-4a11-8da3-61dd877489c0", 00:08:01.540 "strip_size_kb": 64, 00:08:01.540 
"state": "offline", 00:08:01.540 "raid_level": "raid0", 00:08:01.540 "superblock": false, 00:08:01.540 "num_base_bdevs": 2, 00:08:01.540 "num_base_bdevs_discovered": 1, 00:08:01.540 "num_base_bdevs_operational": 1, 00:08:01.540 "base_bdevs_list": [ 00:08:01.540 { 00:08:01.540 "name": null, 00:08:01.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.540 "is_configured": false, 00:08:01.540 "data_offset": 0, 00:08:01.540 "data_size": 65536 00:08:01.540 }, 00:08:01.540 { 00:08:01.540 "name": "BaseBdev2", 00:08:01.540 "uuid": "22ae53d3-ebdc-4662-9c18-7515222ac4c0", 00:08:01.540 "is_configured": true, 00:08:01.540 "data_offset": 0, 00:08:01.540 "data_size": 65536 00:08:01.540 } 00:08:01.540 ] 00:08:01.540 }' 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.540 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 [2024-10-13 02:22:20.659350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:02.110 [2024-10-13 02:22:20.659457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71988 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71988 ']' 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 71988 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71988 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71988' 00:08:02.110 killing process with pid 71988 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71988 00:08:02.110 [2024-10-13 02:22:20.757590] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.110 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71988 00:08:02.110 [2024-10-13 02:22:20.758653] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.370 02:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:02.370 00:08:02.370 real 0m3.904s 00:08:02.370 user 0m6.123s 00:08:02.370 sys 0m0.799s 00:08:02.370 ************************************ 00:08:02.370 END TEST raid_state_function_test 00:08:02.370 ************************************ 00:08:02.370 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.370 02:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.370 02:22:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:02.370 02:22:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:08:02.370 02:22:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.370 02:22:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.631 ************************************ 00:08:02.631 START TEST raid_state_function_test_sb 00:08:02.631 ************************************ 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72225 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72225' 00:08:02.631 Process raid pid: 72225 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72225 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72225 ']' 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.631 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.631 02:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.631 [2024-10-13 02:22:21.150630] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:02.631 [2024-10-13 02:22:21.150746] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.631 [2024-10-13 02:22:21.296798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.890 [2024-10-13 02:22:21.348393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.890 [2024-10-13 02:22:21.390632] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.890 [2024-10-13 02:22:21.390669] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.477 [2024-10-13 02:22:22.044066] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:03.477 [2024-10-13 02:22:22.044126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.477 [2024-10-13 02:22:22.044138] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.477 [2024-10-13 02:22:22.044150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.477 
02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.477 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.478 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.478 "name": "Existed_Raid", 00:08:03.478 "uuid": "b2991b09-dcc9-49c6-abe0-fac87b84eb6a", 00:08:03.478 "strip_size_kb": 64, 00:08:03.478 "state": "configuring", 00:08:03.478 "raid_level": "raid0", 00:08:03.478 "superblock": true, 00:08:03.478 "num_base_bdevs": 2, 00:08:03.478 "num_base_bdevs_discovered": 0, 00:08:03.478 "num_base_bdevs_operational": 2, 00:08:03.478 "base_bdevs_list": [ 00:08:03.478 { 00:08:03.478 "name": "BaseBdev1", 00:08:03.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.478 "is_configured": false, 00:08:03.478 "data_offset": 0, 00:08:03.478 "data_size": 0 00:08:03.478 }, 00:08:03.478 { 00:08:03.478 "name": "BaseBdev2", 00:08:03.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.478 "is_configured": false, 00:08:03.478 "data_offset": 0, 00:08:03.478 "data_size": 0 00:08:03.478 } 00:08:03.478 ] 00:08:03.478 }' 00:08:03.478 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.478 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.048 [2024-10-13 02:22:22.511153] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:04.048 [2024-10-13 02:22:22.511308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.048 [2024-10-13 02:22:22.519147] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.048 [2024-10-13 02:22:22.519260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.048 [2024-10-13 02:22:22.519306] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.048 [2024-10-13 02:22:22.519330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.048 [2024-10-13 02:22:22.536071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.048 BaseBdev1 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.048 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.048 [ 00:08:04.048 { 00:08:04.048 "name": "BaseBdev1", 00:08:04.048 "aliases": [ 00:08:04.048 "1e516d73-b3e9-4372-b10e-f9a24845da80" 00:08:04.048 ], 00:08:04.048 "product_name": "Malloc disk", 00:08:04.048 "block_size": 512, 00:08:04.048 "num_blocks": 65536, 00:08:04.048 "uuid": "1e516d73-b3e9-4372-b10e-f9a24845da80", 00:08:04.048 "assigned_rate_limits": { 00:08:04.048 "rw_ios_per_sec": 0, 00:08:04.048 "rw_mbytes_per_sec": 0, 00:08:04.048 "r_mbytes_per_sec": 0, 00:08:04.048 "w_mbytes_per_sec": 0 00:08:04.048 }, 00:08:04.048 "claimed": true, 
00:08:04.048 "claim_type": "exclusive_write", 00:08:04.048 "zoned": false, 00:08:04.048 "supported_io_types": { 00:08:04.048 "read": true, 00:08:04.048 "write": true, 00:08:04.048 "unmap": true, 00:08:04.048 "flush": true, 00:08:04.048 "reset": true, 00:08:04.048 "nvme_admin": false, 00:08:04.048 "nvme_io": false, 00:08:04.048 "nvme_io_md": false, 00:08:04.048 "write_zeroes": true, 00:08:04.048 "zcopy": true, 00:08:04.048 "get_zone_info": false, 00:08:04.048 "zone_management": false, 00:08:04.048 "zone_append": false, 00:08:04.048 "compare": false, 00:08:04.048 "compare_and_write": false, 00:08:04.048 "abort": true, 00:08:04.048 "seek_hole": false, 00:08:04.048 "seek_data": false, 00:08:04.048 "copy": true, 00:08:04.048 "nvme_iov_md": false 00:08:04.049 }, 00:08:04.049 "memory_domains": [ 00:08:04.049 { 00:08:04.049 "dma_device_id": "system", 00:08:04.049 "dma_device_type": 1 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.049 "dma_device_type": 2 00:08:04.049 } 00:08:04.049 ], 00:08:04.049 "driver_specific": {} 00:08:04.049 } 00:08:04.049 ] 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.049 02:22:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.049 "name": "Existed_Raid", 00:08:04.049 "uuid": "fbde2e92-ae8b-4345-924c-071ce1ae0e5c", 00:08:04.049 "strip_size_kb": 64, 00:08:04.049 "state": "configuring", 00:08:04.049 "raid_level": "raid0", 00:08:04.049 "superblock": true, 00:08:04.049 "num_base_bdevs": 2, 00:08:04.049 "num_base_bdevs_discovered": 1, 00:08:04.049 "num_base_bdevs_operational": 2, 00:08:04.049 "base_bdevs_list": [ 00:08:04.049 { 00:08:04.049 "name": "BaseBdev1", 00:08:04.049 "uuid": "1e516d73-b3e9-4372-b10e-f9a24845da80", 00:08:04.049 "is_configured": true, 00:08:04.049 "data_offset": 2048, 00:08:04.049 "data_size": 63488 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "name": "BaseBdev2", 00:08:04.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.049 
"is_configured": false, 00:08:04.049 "data_offset": 0, 00:08:04.049 "data_size": 0 00:08:04.049 } 00:08:04.049 ] 00:08:04.049 }' 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.049 02:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.647 [2024-10-13 02:22:23.035284] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.647 [2024-10-13 02:22:23.035423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.647 [2024-10-13 02:22:23.047333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.647 [2024-10-13 02:22:23.049155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.647 [2024-10-13 02:22:23.049248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.647 02:22:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.647 02:22:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.647 "name": "Existed_Raid", 00:08:04.647 "uuid": "ae950cec-bb80-44b4-93c3-628e1dfc5be6", 00:08:04.647 "strip_size_kb": 64, 00:08:04.647 "state": "configuring", 00:08:04.647 "raid_level": "raid0", 00:08:04.647 "superblock": true, 00:08:04.647 "num_base_bdevs": 2, 00:08:04.647 "num_base_bdevs_discovered": 1, 00:08:04.647 "num_base_bdevs_operational": 2, 00:08:04.647 "base_bdevs_list": [ 00:08:04.647 { 00:08:04.647 "name": "BaseBdev1", 00:08:04.647 "uuid": "1e516d73-b3e9-4372-b10e-f9a24845da80", 00:08:04.647 "is_configured": true, 00:08:04.647 "data_offset": 2048, 00:08:04.647 "data_size": 63488 00:08:04.647 }, 00:08:04.647 { 00:08:04.647 "name": "BaseBdev2", 00:08:04.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.647 "is_configured": false, 00:08:04.647 "data_offset": 0, 00:08:04.647 "data_size": 0 00:08:04.647 } 00:08:04.647 ] 00:08:04.647 }' 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.647 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.907 [2024-10-13 02:22:23.498263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.907 [2024-10-13 02:22:23.498552] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:04.907 [2024-10-13 02:22:23.498591] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:04.907 [2024-10-13 02:22:23.498898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002390 00:08:04.907 [2024-10-13 02:22:23.499067] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:04.907 [2024-10-13 02:22:23.499111] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:04.907 [2024-10-13 02:22:23.499275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.907 BaseBdev2 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.907 02:22:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.907 [ 00:08:04.907 { 00:08:04.907 "name": "BaseBdev2", 00:08:04.907 "aliases": [ 00:08:04.907 "7f9cd2be-fe2b-4f72-a031-2e1a7d0dc4ff" 00:08:04.907 ], 00:08:04.907 "product_name": "Malloc disk", 00:08:04.907 "block_size": 512, 00:08:04.907 "num_blocks": 65536, 00:08:04.907 "uuid": "7f9cd2be-fe2b-4f72-a031-2e1a7d0dc4ff", 00:08:04.907 "assigned_rate_limits": { 00:08:04.907 "rw_ios_per_sec": 0, 00:08:04.907 "rw_mbytes_per_sec": 0, 00:08:04.907 "r_mbytes_per_sec": 0, 00:08:04.907 "w_mbytes_per_sec": 0 00:08:04.907 }, 00:08:04.907 "claimed": true, 00:08:04.907 "claim_type": "exclusive_write", 00:08:04.907 "zoned": false, 00:08:04.907 "supported_io_types": { 00:08:04.907 "read": true, 00:08:04.907 "write": true, 00:08:04.907 "unmap": true, 00:08:04.907 "flush": true, 00:08:04.907 "reset": true, 00:08:04.907 "nvme_admin": false, 00:08:04.907 "nvme_io": false, 00:08:04.907 "nvme_io_md": false, 00:08:04.907 "write_zeroes": true, 00:08:04.907 "zcopy": true, 00:08:04.907 "get_zone_info": false, 00:08:04.907 "zone_management": false, 00:08:04.907 "zone_append": false, 00:08:04.907 "compare": false, 00:08:04.907 "compare_and_write": false, 00:08:04.907 "abort": true, 00:08:04.907 "seek_hole": false, 00:08:04.907 "seek_data": false, 00:08:04.907 "copy": true, 00:08:04.907 "nvme_iov_md": false 00:08:04.907 }, 00:08:04.907 "memory_domains": [ 00:08:04.907 { 00:08:04.907 "dma_device_id": "system", 00:08:04.907 "dma_device_type": 1 00:08:04.907 }, 00:08:04.907 { 00:08:04.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.907 "dma_device_type": 2 00:08:04.907 } 00:08:04.907 ], 00:08:04.907 "driver_specific": {} 00:08:04.907 } 00:08:04.907 ] 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:04.907 02:22:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.907 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.168 02:22:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.168 "name": "Existed_Raid", 00:08:05.168 "uuid": "ae950cec-bb80-44b4-93c3-628e1dfc5be6", 00:08:05.168 "strip_size_kb": 64, 00:08:05.168 "state": "online", 00:08:05.168 "raid_level": "raid0", 00:08:05.168 "superblock": true, 00:08:05.168 "num_base_bdevs": 2, 00:08:05.168 "num_base_bdevs_discovered": 2, 00:08:05.168 "num_base_bdevs_operational": 2, 00:08:05.168 "base_bdevs_list": [ 00:08:05.168 { 00:08:05.168 "name": "BaseBdev1", 00:08:05.168 "uuid": "1e516d73-b3e9-4372-b10e-f9a24845da80", 00:08:05.168 "is_configured": true, 00:08:05.168 "data_offset": 2048, 00:08:05.168 "data_size": 63488 00:08:05.168 }, 00:08:05.168 { 00:08:05.168 "name": "BaseBdev2", 00:08:05.168 "uuid": "7f9cd2be-fe2b-4f72-a031-2e1a7d0dc4ff", 00:08:05.168 "is_configured": true, 00:08:05.168 "data_offset": 2048, 00:08:05.168 "data_size": 63488 00:08:05.168 } 00:08:05.168 ] 00:08:05.168 }' 00:08:05.168 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.168 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.428 02:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.428 [2024-10-13 02:22:24.005812] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.428 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.428 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.428 "name": "Existed_Raid", 00:08:05.428 "aliases": [ 00:08:05.428 "ae950cec-bb80-44b4-93c3-628e1dfc5be6" 00:08:05.428 ], 00:08:05.428 "product_name": "Raid Volume", 00:08:05.428 "block_size": 512, 00:08:05.428 "num_blocks": 126976, 00:08:05.428 "uuid": "ae950cec-bb80-44b4-93c3-628e1dfc5be6", 00:08:05.428 "assigned_rate_limits": { 00:08:05.428 "rw_ios_per_sec": 0, 00:08:05.428 "rw_mbytes_per_sec": 0, 00:08:05.428 "r_mbytes_per_sec": 0, 00:08:05.428 "w_mbytes_per_sec": 0 00:08:05.428 }, 00:08:05.428 "claimed": false, 00:08:05.428 "zoned": false, 00:08:05.428 "supported_io_types": { 00:08:05.428 "read": true, 00:08:05.428 "write": true, 00:08:05.428 "unmap": true, 00:08:05.428 "flush": true, 00:08:05.428 "reset": true, 00:08:05.428 "nvme_admin": false, 00:08:05.428 "nvme_io": false, 00:08:05.428 "nvme_io_md": false, 00:08:05.428 "write_zeroes": true, 00:08:05.428 "zcopy": false, 00:08:05.428 "get_zone_info": false, 00:08:05.428 "zone_management": false, 00:08:05.428 "zone_append": false, 00:08:05.428 "compare": false, 00:08:05.428 "compare_and_write": false, 00:08:05.428 "abort": false, 00:08:05.429 "seek_hole": false, 00:08:05.429 "seek_data": false, 00:08:05.429 "copy": false, 00:08:05.429 "nvme_iov_md": false 00:08:05.429 }, 00:08:05.429 "memory_domains": [ 00:08:05.429 { 00:08:05.429 
"dma_device_id": "system", 00:08:05.429 "dma_device_type": 1 00:08:05.429 }, 00:08:05.429 { 00:08:05.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.429 "dma_device_type": 2 00:08:05.429 }, 00:08:05.429 { 00:08:05.429 "dma_device_id": "system", 00:08:05.429 "dma_device_type": 1 00:08:05.429 }, 00:08:05.429 { 00:08:05.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.429 "dma_device_type": 2 00:08:05.429 } 00:08:05.429 ], 00:08:05.429 "driver_specific": { 00:08:05.429 "raid": { 00:08:05.429 "uuid": "ae950cec-bb80-44b4-93c3-628e1dfc5be6", 00:08:05.429 "strip_size_kb": 64, 00:08:05.429 "state": "online", 00:08:05.429 "raid_level": "raid0", 00:08:05.429 "superblock": true, 00:08:05.429 "num_base_bdevs": 2, 00:08:05.429 "num_base_bdevs_discovered": 2, 00:08:05.429 "num_base_bdevs_operational": 2, 00:08:05.429 "base_bdevs_list": [ 00:08:05.429 { 00:08:05.429 "name": "BaseBdev1", 00:08:05.429 "uuid": "1e516d73-b3e9-4372-b10e-f9a24845da80", 00:08:05.429 "is_configured": true, 00:08:05.429 "data_offset": 2048, 00:08:05.429 "data_size": 63488 00:08:05.429 }, 00:08:05.429 { 00:08:05.429 "name": "BaseBdev2", 00:08:05.429 "uuid": "7f9cd2be-fe2b-4f72-a031-2e1a7d0dc4ff", 00:08:05.429 "is_configured": true, 00:08:05.429 "data_offset": 2048, 00:08:05.429 "data_size": 63488 00:08:05.429 } 00:08:05.429 ] 00:08:05.429 } 00:08:05.429 } 00:08:05.429 }' 00:08:05.429 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.429 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:05.429 BaseBdev2' 00:08:05.429 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.689 02:22:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.689 [2024-10-13 02:22:24.257185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:05.689 [2024-10-13 02:22:24.257234] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.689 [2024-10-13 02:22:24.257295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.689 "name": "Existed_Raid", 00:08:05.689 "uuid": "ae950cec-bb80-44b4-93c3-628e1dfc5be6", 00:08:05.689 "strip_size_kb": 64, 00:08:05.689 "state": "offline", 00:08:05.689 "raid_level": "raid0", 00:08:05.689 "superblock": true, 00:08:05.689 "num_base_bdevs": 2, 00:08:05.689 "num_base_bdevs_discovered": 1, 00:08:05.689 "num_base_bdevs_operational": 1, 00:08:05.689 "base_bdevs_list": [ 00:08:05.689 { 00:08:05.689 "name": null, 00:08:05.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.689 "is_configured": false, 00:08:05.689 "data_offset": 0, 00:08:05.689 "data_size": 63488 00:08:05.689 }, 00:08:05.689 { 00:08:05.689 "name": "BaseBdev2", 00:08:05.689 "uuid": "7f9cd2be-fe2b-4f72-a031-2e1a7d0dc4ff", 00:08:05.689 "is_configured": true, 00:08:05.689 "data_offset": 2048, 00:08:05.689 "data_size": 63488 00:08:05.689 } 00:08:05.689 ] 
00:08:05.689 }' 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.689 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.258 [2024-10-13 02:22:24.771967] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.258 [2024-10-13 02:22:24.772081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.258 02:22:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72225 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72225 ']' 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72225 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72225 00:08:06.258 killing process with pid 72225 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.258 02:22:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.259 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72225' 00:08:06.259 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72225 00:08:06.259 [2024-10-13 02:22:24.869107] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.259 02:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72225 00:08:06.259 [2024-10-13 02:22:24.870101] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.518 02:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:06.518 00:08:06.518 real 0m4.053s 00:08:06.518 user 0m6.412s 00:08:06.518 sys 0m0.795s 00:08:06.518 02:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.518 02:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.518 ************************************ 00:08:06.518 END TEST raid_state_function_test_sb 00:08:06.518 ************************************ 00:08:06.518 02:22:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:06.518 02:22:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:06.518 02:22:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.518 02:22:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.518 ************************************ 00:08:06.518 START TEST raid_superblock_test 00:08:06.518 ************************************ 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72466 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72466 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72466 ']' 00:08:06.518 
02:22:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.518 02:22:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.778 [2024-10-13 02:22:25.272237] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:06.778 [2024-10-13 02:22:25.272438] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72466 ] 00:08:06.778 [2024-10-13 02:22:25.416004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.778 [2024-10-13 02:22:25.460430] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.038 [2024-10-13 02:22:25.502640] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.038 [2024-10-13 02:22:25.502771] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.608 malloc1 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.608 [2024-10-13 02:22:26.125105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:07.608 [2024-10-13 02:22:26.125171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.608 [2024-10-13 02:22:26.125191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:07.608 [2024-10-13 02:22:26.125205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:07.608 [2024-10-13 02:22:26.127336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.608 [2024-10-13 02:22:26.127439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:07.608 pt1 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.608 malloc2 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.608 [2024-10-13 02:22:26.163840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:07.608 [2024-10-13 02:22:26.163999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.608 [2024-10-13 02:22:26.164040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:07.608 [2024-10-13 02:22:26.164081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.608 [2024-10-13 02:22:26.166577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.608 [2024-10-13 02:22:26.166665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:07.608 pt2 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.608 [2024-10-13 02:22:26.175853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:07.608 [2024-10-13 02:22:26.177746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:07.608 [2024-10-13 02:22:26.177934] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:07.608 [2024-10-13 02:22:26.177984] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:07.608 [2024-10-13 02:22:26.178254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:07.608 [2024-10-13 02:22:26.178432] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:07.608 [2024-10-13 02:22:26.178473] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:07.608 [2024-10-13 02:22:26.178624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.608 02:22:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.608 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.608 "name": "raid_bdev1", 00:08:07.608 "uuid": "066e2cd3-e527-4be9-8070-11915dfa62f4", 00:08:07.608 "strip_size_kb": 64, 00:08:07.608 "state": "online", 00:08:07.608 "raid_level": "raid0", 00:08:07.608 "superblock": true, 00:08:07.608 "num_base_bdevs": 2, 00:08:07.608 "num_base_bdevs_discovered": 2, 00:08:07.608 "num_base_bdevs_operational": 2, 00:08:07.608 "base_bdevs_list": [ 00:08:07.608 { 00:08:07.608 "name": "pt1", 00:08:07.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.608 "is_configured": true, 00:08:07.608 "data_offset": 2048, 00:08:07.608 "data_size": 63488 00:08:07.608 }, 00:08:07.608 { 00:08:07.608 "name": "pt2", 00:08:07.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.608 "is_configured": true, 00:08:07.608 "data_offset": 2048, 00:08:07.609 "data_size": 63488 00:08:07.609 } 00:08:07.609 ] 00:08:07.609 }' 00:08:07.609 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.609 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.177 
02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.177 [2024-10-13 02:22:26.671454] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.177 "name": "raid_bdev1", 00:08:08.177 "aliases": [ 00:08:08.177 "066e2cd3-e527-4be9-8070-11915dfa62f4" 00:08:08.177 ], 00:08:08.177 "product_name": "Raid Volume", 00:08:08.177 "block_size": 512, 00:08:08.177 "num_blocks": 126976, 00:08:08.177 "uuid": "066e2cd3-e527-4be9-8070-11915dfa62f4", 00:08:08.177 "assigned_rate_limits": { 00:08:08.177 "rw_ios_per_sec": 0, 00:08:08.177 "rw_mbytes_per_sec": 0, 00:08:08.177 "r_mbytes_per_sec": 0, 00:08:08.177 "w_mbytes_per_sec": 0 00:08:08.177 }, 00:08:08.177 "claimed": false, 00:08:08.177 "zoned": false, 00:08:08.177 "supported_io_types": { 00:08:08.177 "read": true, 00:08:08.177 "write": true, 00:08:08.177 "unmap": true, 00:08:08.177 "flush": true, 00:08:08.177 "reset": true, 00:08:08.177 "nvme_admin": false, 00:08:08.177 "nvme_io": false, 00:08:08.177 "nvme_io_md": false, 00:08:08.177 "write_zeroes": true, 00:08:08.177 "zcopy": false, 00:08:08.177 "get_zone_info": false, 00:08:08.177 "zone_management": false, 00:08:08.177 "zone_append": false, 00:08:08.177 "compare": false, 00:08:08.177 "compare_and_write": false, 00:08:08.177 "abort": false, 00:08:08.177 "seek_hole": false, 00:08:08.177 
"seek_data": false, 00:08:08.177 "copy": false, 00:08:08.177 "nvme_iov_md": false 00:08:08.177 }, 00:08:08.177 "memory_domains": [ 00:08:08.177 { 00:08:08.177 "dma_device_id": "system", 00:08:08.177 "dma_device_type": 1 00:08:08.177 }, 00:08:08.177 { 00:08:08.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.177 "dma_device_type": 2 00:08:08.177 }, 00:08:08.177 { 00:08:08.177 "dma_device_id": "system", 00:08:08.177 "dma_device_type": 1 00:08:08.177 }, 00:08:08.177 { 00:08:08.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.177 "dma_device_type": 2 00:08:08.177 } 00:08:08.177 ], 00:08:08.177 "driver_specific": { 00:08:08.177 "raid": { 00:08:08.177 "uuid": "066e2cd3-e527-4be9-8070-11915dfa62f4", 00:08:08.177 "strip_size_kb": 64, 00:08:08.177 "state": "online", 00:08:08.177 "raid_level": "raid0", 00:08:08.177 "superblock": true, 00:08:08.177 "num_base_bdevs": 2, 00:08:08.177 "num_base_bdevs_discovered": 2, 00:08:08.177 "num_base_bdevs_operational": 2, 00:08:08.177 "base_bdevs_list": [ 00:08:08.177 { 00:08:08.177 "name": "pt1", 00:08:08.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.177 "is_configured": true, 00:08:08.177 "data_offset": 2048, 00:08:08.177 "data_size": 63488 00:08:08.177 }, 00:08:08.177 { 00:08:08.177 "name": "pt2", 00:08:08.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.177 "is_configured": true, 00:08:08.177 "data_offset": 2048, 00:08:08.177 "data_size": 63488 00:08:08.177 } 00:08:08.177 ] 00:08:08.177 } 00:08:08.177 } 00:08:08.177 }' 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:08.177 pt2' 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.177 02:22:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.177 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.437 [2024-10-13 02:22:26.883019] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=066e2cd3-e527-4be9-8070-11915dfa62f4 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 066e2cd3-e527-4be9-8070-11915dfa62f4 ']' 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.437 [2024-10-13 02:22:26.926649] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.437 [2024-10-13 02:22:26.926680] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.437 [2024-10-13 02:22:26.926764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.437 [2024-10-13 02:22:26.926814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.437 [2024-10-13 02:22:26.926824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.437 02:22:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.437 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.437 [2024-10-13 02:22:27.054432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:08.437 [2024-10-13 02:22:27.056330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:08.438 [2024-10-13 02:22:27.056435] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:08.438 [2024-10-13 02:22:27.056515] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:08.438 [2024-10-13 02:22:27.056554] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.438 [2024-10-13 02:22:27.056576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:08.438 request: 00:08:08.438 { 00:08:08.438 "name": "raid_bdev1", 00:08:08.438 "raid_level": "raid0", 00:08:08.438 "base_bdevs": [ 00:08:08.438 "malloc1", 00:08:08.438 "malloc2" 00:08:08.438 ], 00:08:08.438 "strip_size_kb": 64, 00:08:08.438 "superblock": false, 00:08:08.438 "method": "bdev_raid_create", 00:08:08.438 "req_id": 1 00:08:08.438 } 00:08:08.438 Got JSON-RPC error response 00:08:08.438 response: 00:08:08.438 { 00:08:08.438 "code": -17, 00:08:08.438 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:08.438 } 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.438 
02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.438 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.438 [2024-10-13 02:22:27.118276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:08.438 [2024-10-13 02:22:27.118373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.438 [2024-10-13 02:22:27.118411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:08.438 [2024-10-13 02:22:27.118438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.698 [2024-10-13 02:22:27.120548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.698 [2024-10-13 02:22:27.120617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:08.698 [2024-10-13 02:22:27.120704] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:08.698 [2024-10-13 02:22:27.120758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:08.698 pt1 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.698 "name": "raid_bdev1", 00:08:08.698 "uuid": "066e2cd3-e527-4be9-8070-11915dfa62f4", 00:08:08.698 "strip_size_kb": 64, 00:08:08.698 "state": "configuring", 00:08:08.698 "raid_level": "raid0", 00:08:08.698 "superblock": true, 00:08:08.698 "num_base_bdevs": 2, 00:08:08.698 "num_base_bdevs_discovered": 1, 00:08:08.698 "num_base_bdevs_operational": 2, 00:08:08.698 "base_bdevs_list": [ 00:08:08.698 { 00:08:08.698 "name": "pt1", 00:08:08.698 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:08.698 "is_configured": true, 00:08:08.698 "data_offset": 2048, 00:08:08.698 "data_size": 63488 00:08:08.698 }, 00:08:08.698 { 00:08:08.698 "name": null, 00:08:08.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.698 "is_configured": false, 00:08:08.698 "data_offset": 2048, 00:08:08.698 "data_size": 63488 00:08:08.698 } 00:08:08.698 ] 00:08:08.698 }' 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.698 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.980 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:08.980 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:08.980 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.980 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.980 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.980 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.980 [2024-10-13 02:22:27.565551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.980 [2024-10-13 02:22:27.565686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.980 [2024-10-13 02:22:27.565725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:08.980 [2024-10-13 02:22:27.565752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.980 [2024-10-13 02:22:27.566191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.980 [2024-10-13 02:22:27.566245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:08.980 [2024-10-13 02:22:27.566344] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:08.980 [2024-10-13 02:22:27.566392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.980 [2024-10-13 02:22:27.566498] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:08.980 [2024-10-13 02:22:27.566531] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:08.980 [2024-10-13 02:22:27.566785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:08.981 [2024-10-13 02:22:27.566943] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:08.981 [2024-10-13 02:22:27.566961] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:08.981 [2024-10-13 02:22:27.567063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.981 pt2 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.981 "name": "raid_bdev1", 00:08:08.981 "uuid": "066e2cd3-e527-4be9-8070-11915dfa62f4", 00:08:08.981 "strip_size_kb": 64, 00:08:08.981 "state": "online", 00:08:08.981 "raid_level": "raid0", 00:08:08.981 "superblock": true, 00:08:08.981 "num_base_bdevs": 2, 00:08:08.981 "num_base_bdevs_discovered": 2, 00:08:08.981 "num_base_bdevs_operational": 2, 00:08:08.981 "base_bdevs_list": [ 00:08:08.981 { 00:08:08.981 "name": "pt1", 00:08:08.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.981 "is_configured": true, 00:08:08.981 "data_offset": 2048, 00:08:08.981 "data_size": 63488 00:08:08.981 }, 00:08:08.981 { 00:08:08.981 "name": "pt2", 00:08:08.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.981 "is_configured": true, 00:08:08.981 "data_offset": 2048, 00:08:08.981 "data_size": 63488 00:08:08.981 } 00:08:08.981 ] 00:08:08.981 }' 00:08:08.981 02:22:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.981 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.551 [2024-10-13 02:22:27.977120] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.551 02:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.551 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.551 "name": "raid_bdev1", 00:08:09.551 "aliases": [ 00:08:09.551 "066e2cd3-e527-4be9-8070-11915dfa62f4" 00:08:09.551 ], 00:08:09.551 "product_name": "Raid Volume", 00:08:09.551 "block_size": 512, 00:08:09.551 "num_blocks": 126976, 00:08:09.551 "uuid": "066e2cd3-e527-4be9-8070-11915dfa62f4", 00:08:09.551 "assigned_rate_limits": { 00:08:09.551 "rw_ios_per_sec": 0, 00:08:09.551 "rw_mbytes_per_sec": 0, 00:08:09.551 
"r_mbytes_per_sec": 0, 00:08:09.551 "w_mbytes_per_sec": 0 00:08:09.551 }, 00:08:09.551 "claimed": false, 00:08:09.551 "zoned": false, 00:08:09.551 "supported_io_types": { 00:08:09.551 "read": true, 00:08:09.551 "write": true, 00:08:09.551 "unmap": true, 00:08:09.551 "flush": true, 00:08:09.551 "reset": true, 00:08:09.551 "nvme_admin": false, 00:08:09.551 "nvme_io": false, 00:08:09.551 "nvme_io_md": false, 00:08:09.551 "write_zeroes": true, 00:08:09.551 "zcopy": false, 00:08:09.551 "get_zone_info": false, 00:08:09.551 "zone_management": false, 00:08:09.551 "zone_append": false, 00:08:09.551 "compare": false, 00:08:09.551 "compare_and_write": false, 00:08:09.551 "abort": false, 00:08:09.551 "seek_hole": false, 00:08:09.551 "seek_data": false, 00:08:09.551 "copy": false, 00:08:09.551 "nvme_iov_md": false 00:08:09.551 }, 00:08:09.551 "memory_domains": [ 00:08:09.551 { 00:08:09.551 "dma_device_id": "system", 00:08:09.551 "dma_device_type": 1 00:08:09.551 }, 00:08:09.551 { 00:08:09.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.551 "dma_device_type": 2 00:08:09.551 }, 00:08:09.551 { 00:08:09.551 "dma_device_id": "system", 00:08:09.551 "dma_device_type": 1 00:08:09.551 }, 00:08:09.551 { 00:08:09.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.551 "dma_device_type": 2 00:08:09.551 } 00:08:09.551 ], 00:08:09.551 "driver_specific": { 00:08:09.551 "raid": { 00:08:09.551 "uuid": "066e2cd3-e527-4be9-8070-11915dfa62f4", 00:08:09.551 "strip_size_kb": 64, 00:08:09.551 "state": "online", 00:08:09.551 "raid_level": "raid0", 00:08:09.551 "superblock": true, 00:08:09.551 "num_base_bdevs": 2, 00:08:09.551 "num_base_bdevs_discovered": 2, 00:08:09.551 "num_base_bdevs_operational": 2, 00:08:09.551 "base_bdevs_list": [ 00:08:09.551 { 00:08:09.551 "name": "pt1", 00:08:09.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.551 "is_configured": true, 00:08:09.551 "data_offset": 2048, 00:08:09.551 "data_size": 63488 00:08:09.551 }, 00:08:09.552 { 00:08:09.552 "name": 
"pt2", 00:08:09.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.552 "is_configured": true, 00:08:09.552 "data_offset": 2048, 00:08:09.552 "data_size": 63488 00:08:09.552 } 00:08:09.552 ] 00:08:09.552 } 00:08:09.552 } 00:08:09.552 }' 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:09.552 pt2' 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.552 02:22:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:09.552 [2024-10-13 02:22:28.192700] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.552 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 066e2cd3-e527-4be9-8070-11915dfa62f4 '!=' 066e2cd3-e527-4be9-8070-11915dfa62f4 ']' 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72466 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72466 ']' 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 72466 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72466 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72466' 00:08:09.812 killing process with pid 72466 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72466 00:08:09.812 [2024-10-13 02:22:28.279671] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.812 [2024-10-13 02:22:28.279824] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.812 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72466 00:08:09.812 [2024-10-13 02:22:28.279914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.812 [2024-10-13 02:22:28.279928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:09.812 [2024-10-13 02:22:28.302715] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.071 02:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:10.071 00:08:10.071 real 0m3.359s 00:08:10.071 user 0m5.149s 00:08:10.071 sys 0m0.744s 00:08:10.071 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.071 ************************************ 00:08:10.071 END TEST 
raid_superblock_test 00:08:10.071 ************************************ 00:08:10.071 02:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.071 02:22:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:10.071 02:22:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:10.071 02:22:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.071 02:22:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.071 ************************************ 00:08:10.071 START TEST raid_read_error_test 00:08:10.071 ************************************ 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MfTRzzXOVJ 00:08:10.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72672 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72672 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72672 ']' 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.071 02:22:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:10.071 [2024-10-13 02:22:28.697990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:10.071 [2024-10-13 02:22:28.698186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72672 ] 00:08:10.331 [2024-10-13 02:22:28.843056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.331 [2024-10-13 02:22:28.893469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.331 [2024-10-13 02:22:28.937475] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.331 [2024-10-13 02:22:28.937596] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.901 BaseBdev1_malloc 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.901 true 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.901 [2024-10-13 02:22:29.572600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:10.901 [2024-10-13 02:22:29.572708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.901 [2024-10-13 02:22:29.572749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:10.901 [2024-10-13 02:22:29.572778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.901 [2024-10-13 02:22:29.574898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.901 [2024-10-13 02:22:29.574965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:10.901 BaseBdev1 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.901 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.161 BaseBdev2_malloc 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.161 true 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.161 [2024-10-13 02:22:29.619990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:11.161 [2024-10-13 02:22:29.620045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.161 [2024-10-13 02:22:29.620065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:11.161 [2024-10-13 02:22:29.620073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.161 [2024-10-13 02:22:29.622115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.161 [2024-10-13 02:22:29.622148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:11.161 BaseBdev2 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.161 [2024-10-13 02:22:29.632080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:11.161 [2024-10-13 02:22:29.634021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.161 [2024-10-13 02:22:29.634209] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:11.161 [2024-10-13 02:22:29.634222] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:11.161 [2024-10-13 02:22:29.634480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:11.161 [2024-10-13 02:22:29.634611] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:11.161 [2024-10-13 02:22:29.634625] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:11.161 [2024-10-13 02:22:29.634759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.161 "name": "raid_bdev1", 00:08:11.161 "uuid": "076158ca-f3af-4d50-ae13-a9de80bd1157", 00:08:11.161 "strip_size_kb": 64, 00:08:11.161 "state": "online", 00:08:11.161 "raid_level": "raid0", 00:08:11.161 "superblock": true, 00:08:11.161 "num_base_bdevs": 2, 00:08:11.161 "num_base_bdevs_discovered": 2, 00:08:11.161 "num_base_bdevs_operational": 2, 00:08:11.161 "base_bdevs_list": [ 00:08:11.161 { 00:08:11.161 "name": "BaseBdev1", 00:08:11.161 "uuid": "2de309a8-ee90-5150-be46-82e21f956e1f", 00:08:11.161 "is_configured": true, 00:08:11.161 "data_offset": 2048, 00:08:11.161 "data_size": 63488 00:08:11.161 }, 00:08:11.161 { 00:08:11.161 "name": "BaseBdev2", 00:08:11.161 "uuid": "7e6ffc7d-08d3-5456-bbb0-125a260c9b7f", 00:08:11.161 "is_configured": true, 00:08:11.161 "data_offset": 2048, 00:08:11.161 "data_size": 63488 00:08:11.161 } 00:08:11.161 ] 00:08:11.161 }' 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.161 02:22:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.421 02:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests
00:08:11.421 02:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:11.681 [2024-10-13 02:22:30.127796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:12.621 "name": "raid_bdev1",
00:08:12.621 "uuid": "076158ca-f3af-4d50-ae13-a9de80bd1157",
00:08:12.621 "strip_size_kb": 64,
00:08:12.621 "state": "online",
00:08:12.621 "raid_level": "raid0",
00:08:12.621 "superblock": true,
00:08:12.621 "num_base_bdevs": 2,
00:08:12.621 "num_base_bdevs_discovered": 2,
00:08:12.621 "num_base_bdevs_operational": 2,
00:08:12.621 "base_bdevs_list": [
00:08:12.621 {
00:08:12.621 "name": "BaseBdev1",
00:08:12.621 "uuid": "2de309a8-ee90-5150-be46-82e21f956e1f",
00:08:12.621 "is_configured": true,
00:08:12.621 "data_offset": 2048,
00:08:12.621 "data_size": 63488
00:08:12.621 },
00:08:12.621 {
00:08:12.621 "name": "BaseBdev2",
00:08:12.621 "uuid": "7e6ffc7d-08d3-5456-bbb0-125a260c9b7f",
00:08:12.621 "is_configured": true,
00:08:12.621 "data_offset": 2048,
00:08:12.621 "data_size": 63488
00:08:12.621 }
00:08:12.621 ]
00:08:12.621 }'
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:12.621 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.881 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:12.881 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.881 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.881 [2024-10-13 02:22:31.492111] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:12.881 [2024-10-13 02:22:31.492146] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:12.881 [2024-10-13 02:22:31.494766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:12.881 [2024-10-13 02:22:31.494827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:12.881 [2024-10-13 02:22:31.494866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:12.881 [2024-10-13 02:22:31.494887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:08:12.881 {
00:08:12.881 "results": [
00:08:12.881 {
00:08:12.881 "job": "raid_bdev1",
00:08:12.881 "core_mask": "0x1",
00:08:12.881 "workload": "randrw",
00:08:12.881 "percentage": 50,
00:08:12.881 "status": "finished",
00:08:12.881 "queue_depth": 1,
00:08:12.881 "io_size": 131072,
00:08:12.881 "runtime": 1.365047,
00:08:12.881 "iops": 16603.82389763869,
00:08:12.881 "mibps": 2075.477987204836,
00:08:12.881 "io_failed": 1,
00:08:12.881 "io_timeout": 0,
00:08:12.881 "avg_latency_us": 83.45584641521052,
00:08:12.881 "min_latency_us": 25.7117903930131,
00:08:12.881 "max_latency_us": 1917.4288209606987
00:08:12.881 }
00:08:12.881 ],
00:08:12.881 "core_count": 1
00:08:12.882 }
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72672
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72672 ']'
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72672
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72672
00:08:12.882 killing process with pid 72672 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72672'
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72672
00:08:12.882 [2024-10-13 02:22:31.545297] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:12.882 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72672
00:08:12.882 [2024-10-13 02:22:31.561034] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:13.141 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:13.141 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MfTRzzXOVJ
00:08:13.141 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:13.141 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:08:13.141 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 ************************************
00:08:13.141 END TEST raid_read_error_test ************************************ 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:13.141 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:13.141 02:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
00:08:13.141
00:08:13.141 real 0m3.202s
00:08:13.141 user 0m4.029s
00:08:13.141 sys 0m0.501s
00:08:13.141 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:13.141 02:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.402 02:22:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write
00:08:13.402 02:22:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:13.402 02:22:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:13.402 02:22:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:13.402 ************************************
00:08:13.402 START TEST raid_write_error_test
00:08:13.402 ************************************
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.va2xcMGmUI
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72801
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72801
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72801 ']'
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:13.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:13.402 02:22:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.402 [2024-10-13 02:22:31.979829] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:08:13.402 [2024-10-13 02:22:31.980084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72801 ]
00:08:13.662 [2024-10-13 02:22:32.109085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:13.662 [2024-10-13 02:22:32.160568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:13.662 [2024-10-13 02:22:32.203312] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:13.662 [2024-10-13 02:22:32.203350] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.232 BaseBdev1_malloc
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.232 true
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.232 [2024-10-13 02:22:32.861836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:14.232 [2024-10-13 02:22:32.861916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:14.232 [2024-10-13 02:22:32.861944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:08:14.232 [2024-10-13 02:22:32.861953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:14.232 [2024-10-13 02:22:32.864005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:14.232 [2024-10-13 02:22:32.864043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:14.232 BaseBdev1
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.232 BaseBdev2_malloc
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.232 true
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.232 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.232 [2024-10-13 02:22:32.911239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:14.232 [2024-10-13 02:22:32.911341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:14.232 [2024-10-13 02:22:32.911368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 [2024-10-13 02:22:32.911377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:14.493 [2024-10-13 02:22:32.913490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:14.493 [2024-10-13 02:22:32.913525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:14.493 BaseBdev2
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.493 [2024-10-13 02:22:32.923313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:14.493 [2024-10-13 02:22:32.925230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:14.493 [2024-10-13 02:22:32.925411] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:08:14.493 [2024-10-13 02:22:32.925425] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:14.493 [2024-10-13 02:22:32.925679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:08:14.493 [2024-10-13 02:22:32.925814] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:08:14.493 [2024-10-13 02:22:32.925826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:08:14.493 [2024-10-13 02:22:32.925992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.493 "name": "raid_bdev1",
00:08:14.493 "uuid": "b98f9288-fe8a-49bd-bb40-2e775fa37dee",
00:08:14.493 "strip_size_kb": 64,
00:08:14.493 "state": "online",
00:08:14.493 "raid_level": "raid0",
00:08:14.493 "superblock": true,
00:08:14.493 "num_base_bdevs": 2,
00:08:14.493 "num_base_bdevs_discovered": 2,
00:08:14.493 "num_base_bdevs_operational": 2,
00:08:14.493 "base_bdevs_list": [
00:08:14.493 {
00:08:14.493 "name": "BaseBdev1",
00:08:14.493 "uuid": "780d355d-64a7-5536-a577-63971057f129",
00:08:14.493 "is_configured": true,
00:08:14.493 "data_offset": 2048,
00:08:14.493 "data_size": 63488
00:08:14.493 },
00:08:14.493 {
00:08:14.493 "name": "BaseBdev2",
00:08:14.493 "uuid": "0fee55ed-8e67-5eeb-be18-4d847ad4d5b8",
00:08:14.493 "is_configured": true,
00:08:14.493 "data_offset": 2048,
00:08:14.493 "data_size": 63488
00:08:14.493 }
00:08:14.493 ]
00:08:14.493 }'
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:14.493 02:22:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:14.754 02:22:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:14.754 02:22:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:15.014 [2024-10-13 02:22:33.439079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:15.953 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:15.954 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.954 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:15.954 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.954 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:15.954 "name": "raid_bdev1",
00:08:15.954 "uuid": "b98f9288-fe8a-49bd-bb40-2e775fa37dee",
00:08:15.954 "strip_size_kb": 64,
00:08:15.954 "state": "online",
00:08:15.954 "raid_level": "raid0",
00:08:15.954 "superblock": true,
00:08:15.954 "num_base_bdevs": 2,
00:08:15.954 "num_base_bdevs_discovered": 2,
00:08:15.954 "num_base_bdevs_operational": 2,
00:08:15.954 "base_bdevs_list": [
00:08:15.954 {
00:08:15.954 "name": "BaseBdev1",
00:08:15.954 "uuid": "780d355d-64a7-5536-a577-63971057f129",
00:08:15.954 "is_configured": true,
00:08:15.954 "data_offset": 2048,
00:08:15.954 "data_size": 63488
00:08:15.954 },
00:08:15.954 {
00:08:15.954 "name": "BaseBdev2",
00:08:15.954 "uuid": "0fee55ed-8e67-5eeb-be18-4d847ad4d5b8",
00:08:15.954 "is_configured": true,
00:08:15.954 "data_offset": 2048,
00:08:15.954 "data_size": 63488
00:08:15.954 }
00:08:15.954 ]
00:08:15.954 }'
00:08:15.954 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:15.954 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.213 [2024-10-13 02:22:34.778834] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:16.213 [2024-10-13 02:22:34.778948] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:16.213 [2024-10-13 02:22:34.781705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:16.213 [2024-10-13 02:22:34.781794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:16.213 [2024-10-13 02:22:34.781853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:16.213 [2024-10-13 02:22:34.781909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:08:16.213 {
00:08:16.213 "results": [
00:08:16.213 {
00:08:16.213 "job": "raid_bdev1",
00:08:16.213 "core_mask": "0x1",
00:08:16.213 "workload": "randrw",
00:08:16.213 "percentage": 50,
00:08:16.213 "status": "finished",
00:08:16.213 "queue_depth": 1,
00:08:16.213 "io_size": 131072,
00:08:16.213 "runtime": 1.34056,
00:08:16.213 "iops": 16602.76302440771,
00:08:16.213 "mibps": 2075.345378050964,
00:08:16.213 "io_failed": 1,
00:08:16.213 "io_timeout": 0,
00:08:16.213 "avg_latency_us": 83.32570266674149,
00:08:16.213 "min_latency_us": 24.705676855895195,
00:08:16.213 "max_latency_us": 1488.1537117903931
00:08:16.213 }
00:08:16.213 ],
00:08:16.213 "core_count": 1
00:08:16.213 }
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72801
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 72801 ']'
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72801
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72801
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72801'
00:08:16.213 killing process with pid 72801 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72801
00:08:16.213 [2024-10-13 02:22:34.826611] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:16.213 02:22:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72801
00:08:16.213 [2024-10-13 02:22:34.841749] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:16.473 02:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:16.473 02:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:16.473 02:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.va2xcMGmUI
00:08:16.473 02:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75
00:08:16.473 02:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 ************************************
00:08:16.473 END TEST raid_write_error_test ************************************ 02:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:16.473 02:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:16.473 02:22:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]]
00:08:16.473
00:08:16.473 real 0m3.201s
00:08:16.473 user 0m4.055s
00:08:16.473 sys 0m0.510s
00:08:16.473 02:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:16.473 02:22:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:16.473 02:22:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:16.473 02:22:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:08:16.473 02:22:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:16.473 02:22:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:16.473 02:22:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:16.473 ************************************
00:08:16.473 START TEST raid_state_function_test
00:08:16.473 ************************************
00:08:16.473 02:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false
00:08:16.473 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:16.473 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72928
00:08:16.733 Process raid pid: 72928 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72928'
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72928
00:08:16.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72928 ']' 00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.733 02:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.733 [2024-10-13 02:22:35.246541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:16.733 [2024-10-13 02:22:35.246757] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.733 [2024-10-13 02:22:35.391391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.993 [2024-10-13 02:22:35.438471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.993 [2024-10-13 02:22:35.482072] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.993 [2024-10-13 02:22:35.482194] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.572 [2024-10-13 02:22:36.091831] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.572 [2024-10-13 02:22:36.091956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.572 [2024-10-13 02:22:36.091992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.572 [2024-10-13 02:22:36.092019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.572 "name": "Existed_Raid", 00:08:17.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.572 "strip_size_kb": 64, 00:08:17.572 "state": "configuring", 00:08:17.572 "raid_level": "concat", 00:08:17.572 "superblock": false, 00:08:17.572 "num_base_bdevs": 2, 00:08:17.572 "num_base_bdevs_discovered": 0, 00:08:17.572 "num_base_bdevs_operational": 2, 00:08:17.572 "base_bdevs_list": [ 00:08:17.572 { 00:08:17.572 "name": "BaseBdev1", 00:08:17.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.572 "is_configured": false, 00:08:17.572 "data_offset": 0, 00:08:17.572 "data_size": 0 00:08:17.572 }, 00:08:17.572 { 00:08:17.572 "name": "BaseBdev2", 00:08:17.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.572 "is_configured": false, 00:08:17.572 "data_offset": 0, 00:08:17.572 "data_size": 0 00:08:17.572 } 00:08:17.572 ] 00:08:17.572 }' 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.572 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.870 [2024-10-13 02:22:36.491129] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.870 [2024-10-13 02:22:36.491250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.870 [2024-10-13 02:22:36.499097] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.870 [2024-10-13 02:22:36.499148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.870 [2024-10-13 02:22:36.499166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.870 [2024-10-13 02:22:36.499176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.870 [2024-10-13 02:22:36.516471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:08:17.870 BaseBdev1 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.870 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.870 [ 00:08:17.870 { 00:08:17.870 "name": "BaseBdev1", 00:08:17.870 "aliases": [ 00:08:17.870 "26fa37d0-b9d2-4055-a966-00a8992ab80d" 00:08:17.870 ], 00:08:17.870 "product_name": "Malloc disk", 00:08:17.870 "block_size": 512, 00:08:17.870 "num_blocks": 65536, 00:08:17.870 "uuid": "26fa37d0-b9d2-4055-a966-00a8992ab80d", 00:08:17.870 "assigned_rate_limits": { 00:08:17.870 
"rw_ios_per_sec": 0, 00:08:17.870 "rw_mbytes_per_sec": 0, 00:08:17.870 "r_mbytes_per_sec": 0, 00:08:17.870 "w_mbytes_per_sec": 0 00:08:17.870 }, 00:08:17.870 "claimed": true, 00:08:17.870 "claim_type": "exclusive_write", 00:08:17.870 "zoned": false, 00:08:17.870 "supported_io_types": { 00:08:17.870 "read": true, 00:08:17.870 "write": true, 00:08:17.870 "unmap": true, 00:08:17.870 "flush": true, 00:08:17.870 "reset": true, 00:08:17.871 "nvme_admin": false, 00:08:17.871 "nvme_io": false, 00:08:17.871 "nvme_io_md": false, 00:08:17.871 "write_zeroes": true, 00:08:17.871 "zcopy": true, 00:08:17.871 "get_zone_info": false, 00:08:17.871 "zone_management": false, 00:08:17.871 "zone_append": false, 00:08:17.871 "compare": false, 00:08:18.130 "compare_and_write": false, 00:08:18.130 "abort": true, 00:08:18.130 "seek_hole": false, 00:08:18.130 "seek_data": false, 00:08:18.130 "copy": true, 00:08:18.130 "nvme_iov_md": false 00:08:18.130 }, 00:08:18.130 "memory_domains": [ 00:08:18.130 { 00:08:18.130 "dma_device_id": "system", 00:08:18.130 "dma_device_type": 1 00:08:18.130 }, 00:08:18.130 { 00:08:18.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.130 "dma_device_type": 2 00:08:18.130 } 00:08:18.130 ], 00:08:18.130 "driver_specific": {} 00:08:18.130 } 00:08:18.130 ] 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
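The waitforbdev call traced above blocks until the newly created malloc bdev is examined and visible to `bdev_get_bdevs`. A minimal sketch of that wait-and-poll shape, with the RPC replaced by a stand-in check (the marker file and retry count here are illustrative, not taken from autotest_common.sh):

```shell
#!/usr/bin/env bash
# Stand-in for `rpc_cmd bdev_get_bdevs -b <name> -t 2000`: the real helper
# queries the SPDK app over /var/tmp/spdk.sock; here a marker file plays
# the role of the bdev.
bdev_check_exists() { [ -e "/tmp/fake_bdev_$1" ]; }

waitforbdev() {
    local bdev_name=$1 i
    for (( i = 0; i < 20; i++ )); do
        bdev_check_exists "$bdev_name" && return 0
        sleep 0.1   # retry until the bdev shows up or we give up
    done
    return 1
}

touch "/tmp/fake_bdev_BaseBdev1"
if waitforbdev BaseBdev1; then status=found; else status=missing; fi
rm -f "/tmp/fake_bdev_BaseBdev1"
echo "$status"
```

The real helper additionally issues `bdev_wait_for_examine`, as the trace shows, so claims by the raid module settle before the lookup.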
00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.130 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.130 "name": "Existed_Raid", 00:08:18.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.130 "strip_size_kb": 64, 00:08:18.130 "state": "configuring", 00:08:18.130 "raid_level": "concat", 00:08:18.130 "superblock": false, 00:08:18.130 "num_base_bdevs": 2, 00:08:18.130 "num_base_bdevs_discovered": 1, 00:08:18.130 "num_base_bdevs_operational": 2, 00:08:18.130 "base_bdevs_list": [ 00:08:18.130 { 00:08:18.130 "name": "BaseBdev1", 00:08:18.130 "uuid": "26fa37d0-b9d2-4055-a966-00a8992ab80d", 00:08:18.130 "is_configured": true, 00:08:18.130 "data_offset": 0, 00:08:18.130 "data_size": 65536 00:08:18.130 }, 00:08:18.130 { 00:08:18.130 "name": 
"BaseBdev2", 00:08:18.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.130 "is_configured": false, 00:08:18.131 "data_offset": 0, 00:08:18.131 "data_size": 0 00:08:18.131 } 00:08:18.131 ] 00:08:18.131 }' 00:08:18.131 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.131 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.390 [2024-10-13 02:22:36.939801] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.390 [2024-10-13 02:22:36.939933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.390 [2024-10-13 02:22:36.951825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.390 [2024-10-13 02:22:36.953786] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.390 [2024-10-13 02:22:36.953828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.390 02:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
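verify_raid_bdev_state, seen repeatedly in the trace, fetches `rpc_cmd bdev_raid_get_bdevs all` and filters the result with jq. A runnable sketch of that extraction over a trimmed stand-in for the RPC response (the JSON below is abbreviated sample data modeled on the dumps in this log, not live output):

```shell
#!/usr/bin/env bash
raid_bdev_name=Existed_Raid

# Same filter as bdev_raid.sh@113 in the trace, applied to sample JSON
# (the trace inlines the name; --arg is used here only to parameterize it).
raid_bdev_info=$(jq -r --arg name "$raid_bdev_name" \
    '.[] | select(.name == $name)' <<'JSON'
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 2
  }
]
JSON
)

# Pull out the field verify_raid_bdev_state compares against expected_state.
state=$(jq -r '.state' <<<"$raid_bdev_info")
echo "$state"
```

Here that prints `configuring`, the state the trace expects while only one of the two base bdevs has been discovered.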
00:08:18.390 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.390 "name": "Existed_Raid", 00:08:18.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.391 "strip_size_kb": 64, 00:08:18.391 "state": "configuring", 00:08:18.391 "raid_level": "concat", 00:08:18.391 "superblock": false, 00:08:18.391 "num_base_bdevs": 2, 00:08:18.391 "num_base_bdevs_discovered": 1, 00:08:18.391 "num_base_bdevs_operational": 2, 00:08:18.391 "base_bdevs_list": [ 00:08:18.391 { 00:08:18.391 "name": "BaseBdev1", 00:08:18.391 "uuid": "26fa37d0-b9d2-4055-a966-00a8992ab80d", 00:08:18.391 "is_configured": true, 00:08:18.391 "data_offset": 0, 00:08:18.391 "data_size": 65536 00:08:18.391 }, 00:08:18.391 { 00:08:18.391 "name": "BaseBdev2", 00:08:18.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.391 "is_configured": false, 00:08:18.391 "data_offset": 0, 00:08:18.391 "data_size": 0 00:08:18.391 } 00:08:18.391 ] 00:08:18.391 }' 00:08:18.391 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.391 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.959 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:18.959 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.959 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.959 [2024-10-13 02:22:37.404703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.959 [2024-10-13 02:22:37.404853] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:18.959 [2024-10-13 02:22:37.404906] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:18.959 [2024-10-13 02:22:37.405355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002390 00:08:18.959 [2024-10-13 02:22:37.405594] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:18.959 [2024-10-13 02:22:37.405658] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:18.960 [2024-10-13 02:22:37.406015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.960 BaseBdev2 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.960 [ 00:08:18.960 { 00:08:18.960 "name": "BaseBdev2", 00:08:18.960 "aliases": [ 00:08:18.960 "9b060eec-4f9c-4682-9647-fbea5418fcf4" 00:08:18.960 ], 00:08:18.960 "product_name": "Malloc disk", 00:08:18.960 "block_size": 512, 00:08:18.960 "num_blocks": 65536, 00:08:18.960 "uuid": "9b060eec-4f9c-4682-9647-fbea5418fcf4", 00:08:18.960 "assigned_rate_limits": { 00:08:18.960 "rw_ios_per_sec": 0, 00:08:18.960 "rw_mbytes_per_sec": 0, 00:08:18.960 "r_mbytes_per_sec": 0, 00:08:18.960 "w_mbytes_per_sec": 0 00:08:18.960 }, 00:08:18.960 "claimed": true, 00:08:18.960 "claim_type": "exclusive_write", 00:08:18.960 "zoned": false, 00:08:18.960 "supported_io_types": { 00:08:18.960 "read": true, 00:08:18.960 "write": true, 00:08:18.960 "unmap": true, 00:08:18.960 "flush": true, 00:08:18.960 "reset": true, 00:08:18.960 "nvme_admin": false, 00:08:18.960 "nvme_io": false, 00:08:18.960 "nvme_io_md": false, 00:08:18.960 "write_zeroes": true, 00:08:18.960 "zcopy": true, 00:08:18.960 "get_zone_info": false, 00:08:18.960 "zone_management": false, 00:08:18.960 "zone_append": false, 00:08:18.960 "compare": false, 00:08:18.960 "compare_and_write": false, 00:08:18.960 "abort": true, 00:08:18.960 "seek_hole": false, 00:08:18.960 "seek_data": false, 00:08:18.960 "copy": true, 00:08:18.960 "nvme_iov_md": false 00:08:18.960 }, 00:08:18.960 "memory_domains": [ 00:08:18.960 { 00:08:18.960 "dma_device_id": "system", 00:08:18.960 "dma_device_type": 1 00:08:18.960 }, 00:08:18.960 { 00:08:18.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.960 "dma_device_type": 2 00:08:18.960 } 00:08:18.960 ], 00:08:18.960 "driver_specific": {} 00:08:18.960 } 00:08:18.960 ] 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.960 "name": "Existed_Raid", 00:08:18.960 
"uuid": "e4099a03-bc0c-4a1a-bc3f-23232bef1750", 00:08:18.960 "strip_size_kb": 64, 00:08:18.960 "state": "online", 00:08:18.960 "raid_level": "concat", 00:08:18.960 "superblock": false, 00:08:18.960 "num_base_bdevs": 2, 00:08:18.960 "num_base_bdevs_discovered": 2, 00:08:18.960 "num_base_bdevs_operational": 2, 00:08:18.960 "base_bdevs_list": [ 00:08:18.960 { 00:08:18.960 "name": "BaseBdev1", 00:08:18.960 "uuid": "26fa37d0-b9d2-4055-a966-00a8992ab80d", 00:08:18.960 "is_configured": true, 00:08:18.960 "data_offset": 0, 00:08:18.960 "data_size": 65536 00:08:18.960 }, 00:08:18.960 { 00:08:18.960 "name": "BaseBdev2", 00:08:18.960 "uuid": "9b060eec-4f9c-4682-9647-fbea5418fcf4", 00:08:18.960 "is_configured": true, 00:08:18.960 "data_offset": 0, 00:08:18.960 "data_size": 65536 00:08:18.960 } 00:08:18.960 ] 00:08:18.960 }' 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.960 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.529 [2024-10-13 02:22:37.932193] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.529 "name": "Existed_Raid", 00:08:19.529 "aliases": [ 00:08:19.529 "e4099a03-bc0c-4a1a-bc3f-23232bef1750" 00:08:19.529 ], 00:08:19.529 "product_name": "Raid Volume", 00:08:19.529 "block_size": 512, 00:08:19.529 "num_blocks": 131072, 00:08:19.529 "uuid": "e4099a03-bc0c-4a1a-bc3f-23232bef1750", 00:08:19.529 "assigned_rate_limits": { 00:08:19.529 "rw_ios_per_sec": 0, 00:08:19.529 "rw_mbytes_per_sec": 0, 00:08:19.529 "r_mbytes_per_sec": 0, 00:08:19.529 "w_mbytes_per_sec": 0 00:08:19.529 }, 00:08:19.529 "claimed": false, 00:08:19.529 "zoned": false, 00:08:19.529 "supported_io_types": { 00:08:19.529 "read": true, 00:08:19.529 "write": true, 00:08:19.529 "unmap": true, 00:08:19.529 "flush": true, 00:08:19.529 "reset": true, 00:08:19.529 "nvme_admin": false, 00:08:19.529 "nvme_io": false, 00:08:19.529 "nvme_io_md": false, 00:08:19.529 "write_zeroes": true, 00:08:19.529 "zcopy": false, 00:08:19.529 "get_zone_info": false, 00:08:19.529 "zone_management": false, 00:08:19.529 "zone_append": false, 00:08:19.529 "compare": false, 00:08:19.529 "compare_and_write": false, 00:08:19.529 "abort": false, 00:08:19.529 "seek_hole": false, 00:08:19.529 "seek_data": false, 00:08:19.529 "copy": false, 00:08:19.529 "nvme_iov_md": false 00:08:19.529 }, 00:08:19.529 "memory_domains": [ 00:08:19.529 { 00:08:19.529 "dma_device_id": "system", 00:08:19.529 "dma_device_type": 1 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.529 "dma_device_type": 2 00:08:19.529 }, 
00:08:19.529 { 00:08:19.529 "dma_device_id": "system", 00:08:19.529 "dma_device_type": 1 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.529 "dma_device_type": 2 00:08:19.529 } 00:08:19.529 ], 00:08:19.529 "driver_specific": { 00:08:19.529 "raid": { 00:08:19.529 "uuid": "e4099a03-bc0c-4a1a-bc3f-23232bef1750", 00:08:19.529 "strip_size_kb": 64, 00:08:19.529 "state": "online", 00:08:19.529 "raid_level": "concat", 00:08:19.529 "superblock": false, 00:08:19.529 "num_base_bdevs": 2, 00:08:19.529 "num_base_bdevs_discovered": 2, 00:08:19.529 "num_base_bdevs_operational": 2, 00:08:19.529 "base_bdevs_list": [ 00:08:19.529 { 00:08:19.529 "name": "BaseBdev1", 00:08:19.529 "uuid": "26fa37d0-b9d2-4055-a966-00a8992ab80d", 00:08:19.529 "is_configured": true, 00:08:19.529 "data_offset": 0, 00:08:19.529 "data_size": 65536 00:08:19.529 }, 00:08:19.529 { 00:08:19.529 "name": "BaseBdev2", 00:08:19.529 "uuid": "9b060eec-4f9c-4682-9647-fbea5418fcf4", 00:08:19.529 "is_configured": true, 00:08:19.529 "data_offset": 0, 00:08:19.529 "data_size": 65536 00:08:19.529 } 00:08:19.529 ] 00:08:19.529 } 00:08:19.529 } 00:08:19.529 }' 00:08:19.529 02:22:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.529 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:19.529 BaseBdev2' 00:08:19.529 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.529 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.529 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.529 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:19.529 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:19.529 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.529 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.529 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.530 [2024-10-13 
02:22:38.171486] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:19.530 [2024-10-13 02:22:38.171563] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.530 [2024-10-13 02:22:38.171629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.530 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.789 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.789 "name": "Existed_Raid", 00:08:19.789 "uuid": "e4099a03-bc0c-4a1a-bc3f-23232bef1750", 00:08:19.789 "strip_size_kb": 64, 00:08:19.789 "state": "offline", 00:08:19.789 "raid_level": "concat", 00:08:19.789 "superblock": false, 00:08:19.789 "num_base_bdevs": 2, 00:08:19.789 "num_base_bdevs_discovered": 1, 00:08:19.789 "num_base_bdevs_operational": 1, 00:08:19.789 "base_bdevs_list": [ 00:08:19.789 { 00:08:19.789 "name": null, 00:08:19.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.789 "is_configured": false, 00:08:19.789 "data_offset": 0, 00:08:19.789 "data_size": 65536 00:08:19.789 }, 00:08:19.789 { 00:08:19.789 "name": "BaseBdev2", 00:08:19.789 "uuid": "9b060eec-4f9c-4682-9647-fbea5418fcf4", 00:08:19.789 "is_configured": true, 00:08:19.789 "data_offset": 0, 00:08:19.789 "data_size": 65536 00:08:19.789 } 00:08:19.789 ] 00:08:19.789 }' 00:08:19.789 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.789 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:20.049 02:22:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.049 [2024-10-13 02:22:38.650278] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:20.049 [2024-10-13 02:22:38.650388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72928 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72928 ']' 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72928 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.049 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72928 00:08:20.309 killing process with pid 72928 00:08:20.309 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.309 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.309 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72928' 00:08:20.309 02:22:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72928 00:08:20.309 [2024-10-13 02:22:38.760538] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.309 02:22:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72928 00:08:20.309 [2024-10-13 02:22:38.761525] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.569 ************************************ 00:08:20.569 END TEST raid_state_function_test 00:08:20.569 ************************************ 00:08:20.569 02:22:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:20.569 00:08:20.569 real 0m3.851s 00:08:20.569 user 0m6.031s 00:08:20.569 sys 0m0.759s 00:08:20.569 02:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.569 02:22:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.569 02:22:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:20.569 02:22:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:20.569 02:22:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.569 02:22:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.569 ************************************ 00:08:20.569 START TEST raid_state_function_test_sb 00:08:20.569 ************************************ 00:08:20.569 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:20.570 02:22:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73165 00:08:20.570 Process raid pid: 73165 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73165' 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73165 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73165 ']' 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.570 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.570 [2024-10-13 02:22:39.158464] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:20.570 [2024-10-13 02:22:39.158644] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.829 [2024-10-13 02:22:39.302768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.829 [2024-10-13 02:22:39.349643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.829 [2024-10-13 02:22:39.392823] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.829 [2024-10-13 02:22:39.392964] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.397 [2024-10-13 02:22:39.986451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.397 [2024-10-13 02:22:39.986562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.397 [2024-10-13 02:22:39.986603] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.397 [2024-10-13 02:22:39.986626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.397 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.398 02:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.398 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.398 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.398 "name": "Existed_Raid", 00:08:21.398 "uuid": "f30e06a8-132e-48db-9b0f-77b9e1886526", 00:08:21.398 
"strip_size_kb": 64, 00:08:21.398 "state": "configuring", 00:08:21.398 "raid_level": "concat", 00:08:21.398 "superblock": true, 00:08:21.398 "num_base_bdevs": 2, 00:08:21.398 "num_base_bdevs_discovered": 0, 00:08:21.398 "num_base_bdevs_operational": 2, 00:08:21.398 "base_bdevs_list": [ 00:08:21.398 { 00:08:21.398 "name": "BaseBdev1", 00:08:21.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.398 "is_configured": false, 00:08:21.398 "data_offset": 0, 00:08:21.398 "data_size": 0 00:08:21.398 }, 00:08:21.398 { 00:08:21.398 "name": "BaseBdev2", 00:08:21.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.398 "is_configured": false, 00:08:21.398 "data_offset": 0, 00:08:21.398 "data_size": 0 00:08:21.398 } 00:08:21.398 ] 00:08:21.398 }' 00:08:21.398 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.398 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.966 [2024-10-13 02:22:40.353732] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.966 [2024-10-13 02:22:40.353778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.966 [2024-10-13 02:22:40.365721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.966 [2024-10-13 02:22:40.365766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.966 [2024-10-13 02:22:40.365783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.966 [2024-10-13 02:22:40.365793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.966 [2024-10-13 02:22:40.386759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.966 BaseBdev1 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.966 [ 00:08:21.966 { 00:08:21.966 "name": "BaseBdev1", 00:08:21.966 "aliases": [ 00:08:21.966 "b72625cb-9168-437b-a4b9-bd7c9d94a797" 00:08:21.966 ], 00:08:21.966 "product_name": "Malloc disk", 00:08:21.966 "block_size": 512, 00:08:21.966 "num_blocks": 65536, 00:08:21.966 "uuid": "b72625cb-9168-437b-a4b9-bd7c9d94a797", 00:08:21.966 "assigned_rate_limits": { 00:08:21.966 "rw_ios_per_sec": 0, 00:08:21.966 "rw_mbytes_per_sec": 0, 00:08:21.966 "r_mbytes_per_sec": 0, 00:08:21.966 "w_mbytes_per_sec": 0 00:08:21.966 }, 00:08:21.966 "claimed": true, 00:08:21.966 "claim_type": "exclusive_write", 00:08:21.966 "zoned": false, 00:08:21.966 "supported_io_types": { 00:08:21.966 "read": true, 00:08:21.966 "write": true, 00:08:21.966 "unmap": true, 00:08:21.966 "flush": true, 00:08:21.966 "reset": true, 00:08:21.966 "nvme_admin": false, 00:08:21.966 "nvme_io": false, 00:08:21.966 "nvme_io_md": false, 00:08:21.966 "write_zeroes": true, 00:08:21.966 "zcopy": true, 00:08:21.966 "get_zone_info": false, 00:08:21.966 "zone_management": false, 00:08:21.966 "zone_append": false, 00:08:21.966 "compare": false, 00:08:21.966 
"compare_and_write": false, 00:08:21.966 "abort": true, 00:08:21.966 "seek_hole": false, 00:08:21.966 "seek_data": false, 00:08:21.966 "copy": true, 00:08:21.966 "nvme_iov_md": false 00:08:21.966 }, 00:08:21.966 "memory_domains": [ 00:08:21.966 { 00:08:21.966 "dma_device_id": "system", 00:08:21.966 "dma_device_type": 1 00:08:21.966 }, 00:08:21.966 { 00:08:21.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.966 "dma_device_type": 2 00:08:21.966 } 00:08:21.966 ], 00:08:21.966 "driver_specific": {} 00:08:21.966 } 00:08:21.966 ] 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.966 02:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.966 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.966 "name": "Existed_Raid", 00:08:21.966 "uuid": "2acbdb6f-b538-4c25-8efd-8801f76c134b", 00:08:21.966 "strip_size_kb": 64, 00:08:21.966 "state": "configuring", 00:08:21.966 "raid_level": "concat", 00:08:21.966 "superblock": true, 00:08:21.966 "num_base_bdevs": 2, 00:08:21.967 "num_base_bdevs_discovered": 1, 00:08:21.967 "num_base_bdevs_operational": 2, 00:08:21.967 "base_bdevs_list": [ 00:08:21.967 { 00:08:21.967 "name": "BaseBdev1", 00:08:21.967 "uuid": "b72625cb-9168-437b-a4b9-bd7c9d94a797", 00:08:21.967 "is_configured": true, 00:08:21.967 "data_offset": 2048, 00:08:21.967 "data_size": 63488 00:08:21.967 }, 00:08:21.967 { 00:08:21.967 "name": "BaseBdev2", 00:08:21.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.967 "is_configured": false, 00:08:21.967 "data_offset": 0, 00:08:21.967 "data_size": 0 00:08:21.967 } 00:08:21.967 ] 00:08:21.967 }' 00:08:21.967 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.967 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.226 [2024-10-13 02:22:40.834027] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.226 [2024-10-13 02:22:40.834124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.226 [2024-10-13 02:22:40.842055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.226 [2024-10-13 02:22:40.843989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.226 [2024-10-13 02:22:40.844063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.226 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.226 "name": "Existed_Raid", 00:08:22.226 "uuid": "13d6a6e2-fc92-4a4d-8377-c29975b4a09d", 00:08:22.226 "strip_size_kb": 64, 00:08:22.226 "state": "configuring", 00:08:22.226 "raid_level": "concat", 00:08:22.226 "superblock": true, 00:08:22.226 "num_base_bdevs": 2, 00:08:22.226 "num_base_bdevs_discovered": 1, 00:08:22.227 "num_base_bdevs_operational": 2, 00:08:22.227 "base_bdevs_list": [ 00:08:22.227 { 00:08:22.227 "name": "BaseBdev1", 00:08:22.227 "uuid": 
"b72625cb-9168-437b-a4b9-bd7c9d94a797", 00:08:22.227 "is_configured": true, 00:08:22.227 "data_offset": 2048, 00:08:22.227 "data_size": 63488 00:08:22.227 }, 00:08:22.227 { 00:08:22.227 "name": "BaseBdev2", 00:08:22.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.227 "is_configured": false, 00:08:22.227 "data_offset": 0, 00:08:22.227 "data_size": 0 00:08:22.227 } 00:08:22.227 ] 00:08:22.227 }' 00:08:22.227 02:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.227 02:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.797 [2024-10-13 02:22:41.259353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.797 [2024-10-13 02:22:41.259657] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:22.797 [2024-10-13 02:22:41.259716] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:22.797 [2024-10-13 02:22:41.260093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:22.797 BaseBdev2 00:08:22.797 [2024-10-13 02:22:41.260321] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:22.797 [2024-10-13 02:22:41.260351] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:22.797 [2024-10-13 02:22:41.260498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.797 [ 00:08:22.797 { 00:08:22.797 "name": "BaseBdev2", 00:08:22.797 "aliases": [ 00:08:22.797 "edf5ae38-1abd-4d15-919d-8e2deda87819" 00:08:22.797 ], 00:08:22.797 "product_name": "Malloc disk", 00:08:22.797 "block_size": 512, 00:08:22.797 "num_blocks": 65536, 00:08:22.797 "uuid": "edf5ae38-1abd-4d15-919d-8e2deda87819", 00:08:22.797 "assigned_rate_limits": { 00:08:22.797 "rw_ios_per_sec": 0, 00:08:22.797 "rw_mbytes_per_sec": 0, 00:08:22.797 "r_mbytes_per_sec": 0, 
00:08:22.797 "w_mbytes_per_sec": 0 00:08:22.797 }, 00:08:22.797 "claimed": true, 00:08:22.797 "claim_type": "exclusive_write", 00:08:22.797 "zoned": false, 00:08:22.797 "supported_io_types": { 00:08:22.797 "read": true, 00:08:22.797 "write": true, 00:08:22.797 "unmap": true, 00:08:22.797 "flush": true, 00:08:22.797 "reset": true, 00:08:22.797 "nvme_admin": false, 00:08:22.797 "nvme_io": false, 00:08:22.797 "nvme_io_md": false, 00:08:22.797 "write_zeroes": true, 00:08:22.797 "zcopy": true, 00:08:22.797 "get_zone_info": false, 00:08:22.797 "zone_management": false, 00:08:22.797 "zone_append": false, 00:08:22.797 "compare": false, 00:08:22.797 "compare_and_write": false, 00:08:22.797 "abort": true, 00:08:22.797 "seek_hole": false, 00:08:22.797 "seek_data": false, 00:08:22.797 "copy": true, 00:08:22.797 "nvme_iov_md": false 00:08:22.797 }, 00:08:22.797 "memory_domains": [ 00:08:22.797 { 00:08:22.797 "dma_device_id": "system", 00:08:22.797 "dma_device_type": 1 00:08:22.797 }, 00:08:22.797 { 00:08:22.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.797 "dma_device_type": 2 00:08:22.797 } 00:08:22.797 ], 00:08:22.797 "driver_specific": {} 00:08:22.797 } 00:08:22.797 ] 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.797 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.797 "name": "Existed_Raid", 00:08:22.797 "uuid": "13d6a6e2-fc92-4a4d-8377-c29975b4a09d", 00:08:22.797 "strip_size_kb": 64, 00:08:22.797 "state": "online", 00:08:22.798 "raid_level": "concat", 00:08:22.798 "superblock": true, 00:08:22.798 "num_base_bdevs": 2, 00:08:22.798 "num_base_bdevs_discovered": 2, 00:08:22.798 "num_base_bdevs_operational": 2, 00:08:22.798 "base_bdevs_list": [ 00:08:22.798 { 00:08:22.798 "name": "BaseBdev1", 00:08:22.798 "uuid": 
"b72625cb-9168-437b-a4b9-bd7c9d94a797", 00:08:22.798 "is_configured": true, 00:08:22.798 "data_offset": 2048, 00:08:22.798 "data_size": 63488 00:08:22.798 }, 00:08:22.798 { 00:08:22.798 "name": "BaseBdev2", 00:08:22.798 "uuid": "edf5ae38-1abd-4d15-919d-8e2deda87819", 00:08:22.798 "is_configured": true, 00:08:22.798 "data_offset": 2048, 00:08:22.798 "data_size": 63488 00:08:22.798 } 00:08:22.798 ] 00:08:22.798 }' 00:08:22.798 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.798 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.057 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.057 [2024-10-13 02:22:41.730916] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.319 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:23.319 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.319 "name": "Existed_Raid", 00:08:23.319 "aliases": [ 00:08:23.319 "13d6a6e2-fc92-4a4d-8377-c29975b4a09d" 00:08:23.319 ], 00:08:23.319 "product_name": "Raid Volume", 00:08:23.319 "block_size": 512, 00:08:23.319 "num_blocks": 126976, 00:08:23.319 "uuid": "13d6a6e2-fc92-4a4d-8377-c29975b4a09d", 00:08:23.319 "assigned_rate_limits": { 00:08:23.319 "rw_ios_per_sec": 0, 00:08:23.319 "rw_mbytes_per_sec": 0, 00:08:23.319 "r_mbytes_per_sec": 0, 00:08:23.319 "w_mbytes_per_sec": 0 00:08:23.319 }, 00:08:23.319 "claimed": false, 00:08:23.319 "zoned": false, 00:08:23.319 "supported_io_types": { 00:08:23.319 "read": true, 00:08:23.319 "write": true, 00:08:23.319 "unmap": true, 00:08:23.320 "flush": true, 00:08:23.320 "reset": true, 00:08:23.320 "nvme_admin": false, 00:08:23.320 "nvme_io": false, 00:08:23.320 "nvme_io_md": false, 00:08:23.320 "write_zeroes": true, 00:08:23.320 "zcopy": false, 00:08:23.320 "get_zone_info": false, 00:08:23.320 "zone_management": false, 00:08:23.320 "zone_append": false, 00:08:23.320 "compare": false, 00:08:23.320 "compare_and_write": false, 00:08:23.320 "abort": false, 00:08:23.320 "seek_hole": false, 00:08:23.320 "seek_data": false, 00:08:23.320 "copy": false, 00:08:23.320 "nvme_iov_md": false 00:08:23.320 }, 00:08:23.320 "memory_domains": [ 00:08:23.320 { 00:08:23.320 "dma_device_id": "system", 00:08:23.320 "dma_device_type": 1 00:08:23.320 }, 00:08:23.320 { 00:08:23.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.320 "dma_device_type": 2 00:08:23.320 }, 00:08:23.320 { 00:08:23.320 "dma_device_id": "system", 00:08:23.320 "dma_device_type": 1 00:08:23.320 }, 00:08:23.320 { 00:08:23.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.320 "dma_device_type": 2 00:08:23.320 } 00:08:23.320 ], 00:08:23.320 "driver_specific": { 00:08:23.320 "raid": { 00:08:23.320 "uuid": "13d6a6e2-fc92-4a4d-8377-c29975b4a09d", 00:08:23.320 
"strip_size_kb": 64, 00:08:23.320 "state": "online", 00:08:23.320 "raid_level": "concat", 00:08:23.320 "superblock": true, 00:08:23.320 "num_base_bdevs": 2, 00:08:23.320 "num_base_bdevs_discovered": 2, 00:08:23.320 "num_base_bdevs_operational": 2, 00:08:23.320 "base_bdevs_list": [ 00:08:23.320 { 00:08:23.320 "name": "BaseBdev1", 00:08:23.320 "uuid": "b72625cb-9168-437b-a4b9-bd7c9d94a797", 00:08:23.320 "is_configured": true, 00:08:23.320 "data_offset": 2048, 00:08:23.320 "data_size": 63488 00:08:23.320 }, 00:08:23.320 { 00:08:23.320 "name": "BaseBdev2", 00:08:23.320 "uuid": "edf5ae38-1abd-4d15-919d-8e2deda87819", 00:08:23.320 "is_configured": true, 00:08:23.320 "data_offset": 2048, 00:08:23.320 "data_size": 63488 00:08:23.320 } 00:08:23.320 ] 00:08:23.320 } 00:08:23.320 } 00:08:23.320 }' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:23.320 BaseBdev2' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.320 [2024-10-13 02:22:41.950297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.320 [2024-10-13 02:22:41.950368] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.320 [2024-10-13 02:22:41.950423] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.320 02:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.580 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.580 "name": "Existed_Raid", 00:08:23.580 "uuid": "13d6a6e2-fc92-4a4d-8377-c29975b4a09d", 00:08:23.580 "strip_size_kb": 64, 00:08:23.580 "state": "offline", 00:08:23.580 "raid_level": "concat", 00:08:23.580 "superblock": true, 00:08:23.580 "num_base_bdevs": 2, 00:08:23.580 "num_base_bdevs_discovered": 1, 00:08:23.580 "num_base_bdevs_operational": 1, 00:08:23.580 "base_bdevs_list": [ 00:08:23.580 { 00:08:23.580 "name": null, 00:08:23.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.580 "is_configured": false, 00:08:23.580 "data_offset": 0, 00:08:23.580 "data_size": 63488 00:08:23.580 }, 00:08:23.580 { 00:08:23.580 "name": "BaseBdev2", 00:08:23.580 "uuid": "edf5ae38-1abd-4d15-919d-8e2deda87819", 00:08:23.580 "is_configured": true, 00:08:23.580 "data_offset": 2048, 00:08:23.580 "data_size": 63488 00:08:23.580 } 00:08:23.580 ] 00:08:23.580 }' 00:08:23.580 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.580 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:23.855 
02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.855 [2024-10-13 02:22:42.448919] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.855 [2024-10-13 02:22:42.449015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:23.855 02:22:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73165 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73165 ']' 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73165 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.855 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73165 00:08:24.115 killing process with pid 73165 00:08:24.115 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.115 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.115 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73165' 00:08:24.115 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73165 00:08:24.115 [2024-10-13 02:22:42.558801] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.115 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73165 00:08:24.115 [2024-10-13 02:22:42.559791] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.375 02:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:24.375 00:08:24.375 real 0m3.738s 00:08:24.375 user 0m5.816s 00:08:24.375 sys 0m0.772s 00:08:24.375 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.375 ************************************ 00:08:24.375 END TEST raid_state_function_test_sb 00:08:24.375 ************************************ 00:08:24.375 02:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 02:22:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:24.375 02:22:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:24.375 02:22:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.375 02:22:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 ************************************ 00:08:24.375 START TEST raid_superblock_test 00:08:24.375 ************************************ 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:24.375 
02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73406 00:08:24.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73406 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73406 ']' 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.375 02:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 [2024-10-13 02:22:42.966306] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:24.375 [2024-10-13 02:22:42.966420] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73406 ] 00:08:24.635 [2024-10-13 02:22:43.111641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.635 [2024-10-13 02:22:43.157578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.635 [2024-10-13 02:22:43.200464] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.635 [2024-10-13 02:22:43.200505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:25.204 02:22:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.204 malloc1 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.204 [2024-10-13 02:22:43.839001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:25.204 [2024-10-13 02:22:43.839109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.204 [2024-10-13 02:22:43.839152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:25.204 [2024-10-13 02:22:43.839188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.204 [2024-10-13 02:22:43.841342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.204 [2024-10-13 02:22:43.841418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:25.204 pt1 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:25.204 02:22:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.204 malloc2 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.204 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.464 [2024-10-13 02:22:43.887077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:25.464 [2024-10-13 02:22:43.887297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.464 [2024-10-13 02:22:43.887342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:25.464 
[2024-10-13 02:22:43.887369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.464 [2024-10-13 02:22:43.892245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.464 pt2 00:08:25.464 [2024-10-13 02:22:43.892410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.464 [2024-10-13 02:22:43.896816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:25.464 [2024-10-13 02:22:43.899731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:25.464 [2024-10-13 02:22:43.900025] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:25.464 [2024-10-13 02:22:43.900057] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:25.464 [2024-10-13 02:22:43.900456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:25.464 [2024-10-13 02:22:43.900657] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:25.464 [2024-10-13 02:22:43.900673] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:25.464 [2024-10-13 02:22:43.900945] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.464 "name": "raid_bdev1", 00:08:25.464 "uuid": 
"4872aa33-6755-4af5-9619-6539e6614711", 00:08:25.464 "strip_size_kb": 64, 00:08:25.464 "state": "online", 00:08:25.464 "raid_level": "concat", 00:08:25.464 "superblock": true, 00:08:25.464 "num_base_bdevs": 2, 00:08:25.464 "num_base_bdevs_discovered": 2, 00:08:25.464 "num_base_bdevs_operational": 2, 00:08:25.464 "base_bdevs_list": [ 00:08:25.464 { 00:08:25.464 "name": "pt1", 00:08:25.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.464 "is_configured": true, 00:08:25.464 "data_offset": 2048, 00:08:25.464 "data_size": 63488 00:08:25.464 }, 00:08:25.464 { 00:08:25.464 "name": "pt2", 00:08:25.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.464 "is_configured": true, 00:08:25.464 "data_offset": 2048, 00:08:25.464 "data_size": 63488 00:08:25.464 } 00:08:25.464 ] 00:08:25.464 }' 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.464 02:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 
02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.724 [2024-10-13 02:22:44.280509] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.724 "name": "raid_bdev1", 00:08:25.724 "aliases": [ 00:08:25.724 "4872aa33-6755-4af5-9619-6539e6614711" 00:08:25.724 ], 00:08:25.724 "product_name": "Raid Volume", 00:08:25.724 "block_size": 512, 00:08:25.724 "num_blocks": 126976, 00:08:25.724 "uuid": "4872aa33-6755-4af5-9619-6539e6614711", 00:08:25.724 "assigned_rate_limits": { 00:08:25.724 "rw_ios_per_sec": 0, 00:08:25.724 "rw_mbytes_per_sec": 0, 00:08:25.724 "r_mbytes_per_sec": 0, 00:08:25.724 "w_mbytes_per_sec": 0 00:08:25.724 }, 00:08:25.724 "claimed": false, 00:08:25.724 "zoned": false, 00:08:25.724 "supported_io_types": { 00:08:25.724 "read": true, 00:08:25.724 "write": true, 00:08:25.724 "unmap": true, 00:08:25.724 "flush": true, 00:08:25.724 "reset": true, 00:08:25.724 "nvme_admin": false, 00:08:25.724 "nvme_io": false, 00:08:25.724 "nvme_io_md": false, 00:08:25.724 "write_zeroes": true, 00:08:25.724 "zcopy": false, 00:08:25.724 "get_zone_info": false, 00:08:25.724 "zone_management": false, 00:08:25.724 "zone_append": false, 00:08:25.724 "compare": false, 00:08:25.724 "compare_and_write": false, 00:08:25.724 "abort": false, 00:08:25.724 "seek_hole": false, 00:08:25.724 "seek_data": false, 00:08:25.724 "copy": false, 00:08:25.724 "nvme_iov_md": false 00:08:25.724 }, 00:08:25.724 "memory_domains": [ 00:08:25.724 { 00:08:25.724 "dma_device_id": "system", 00:08:25.724 "dma_device_type": 1 00:08:25.724 }, 00:08:25.724 { 00:08:25.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.724 "dma_device_type": 2 00:08:25.724 }, 00:08:25.724 { 00:08:25.724 "dma_device_id": "system", 00:08:25.724 
"dma_device_type": 1 00:08:25.724 }, 00:08:25.724 { 00:08:25.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.724 "dma_device_type": 2 00:08:25.724 } 00:08:25.724 ], 00:08:25.724 "driver_specific": { 00:08:25.724 "raid": { 00:08:25.724 "uuid": "4872aa33-6755-4af5-9619-6539e6614711", 00:08:25.724 "strip_size_kb": 64, 00:08:25.724 "state": "online", 00:08:25.724 "raid_level": "concat", 00:08:25.724 "superblock": true, 00:08:25.724 "num_base_bdevs": 2, 00:08:25.724 "num_base_bdevs_discovered": 2, 00:08:25.724 "num_base_bdevs_operational": 2, 00:08:25.724 "base_bdevs_list": [ 00:08:25.724 { 00:08:25.724 "name": "pt1", 00:08:25.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:25.724 "is_configured": true, 00:08:25.724 "data_offset": 2048, 00:08:25.724 "data_size": 63488 00:08:25.724 }, 00:08:25.724 { 00:08:25.724 "name": "pt2", 00:08:25.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.724 "is_configured": true, 00:08:25.724 "data_offset": 2048, 00:08:25.724 "data_size": 63488 00:08:25.724 } 00:08:25.724 ] 00:08:25.724 } 00:08:25.724 } 00:08:25.724 }' 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:25.724 pt2' 00:08:25.724 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.984 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.984 [2024-10-13 02:22:44.500151] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4872aa33-6755-4af5-9619-6539e6614711 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4872aa33-6755-4af5-9619-6539e6614711 ']' 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.985 [2024-10-13 02:22:44.543779] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.985 [2024-10-13 02:22:44.543848] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.985 [2024-10-13 02:22:44.543974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.985 [2024-10-13 02:22:44.544074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.985 [2024-10-13 02:22:44.544136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.985 
02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.985 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.245 [2024-10-13 02:22:44.671595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:26.245 [2024-10-13 02:22:44.673731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:26.245 [2024-10-13 02:22:44.673842] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:26.245 [2024-10-13 02:22:44.673936] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:26.245 [2024-10-13 02:22:44.673999] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:26.245 [2024-10-13 02:22:44.674050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:26.245 request: 00:08:26.245 { 00:08:26.245 "name": "raid_bdev1", 00:08:26.245 "raid_level": "concat", 00:08:26.245 "base_bdevs": [ 00:08:26.245 "malloc1", 00:08:26.245 "malloc2" 00:08:26.245 ], 00:08:26.245 "strip_size_kb": 64, 00:08:26.245 "superblock": false, 00:08:26.245 "method": "bdev_raid_create", 00:08:26.245 "req_id": 1 00:08:26.245 } 00:08:26.245 Got JSON-RPC error response 00:08:26.245 response: 00:08:26.245 { 00:08:26.245 "code": -17, 00:08:26.245 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:26.245 } 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.245 [2024-10-13 02:22:44.731460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:26.245 [2024-10-13 02:22:44.731575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.245 [2024-10-13 02:22:44.731631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:26.245 [2024-10-13 02:22:44.731663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.245 [2024-10-13 02:22:44.734135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.245 [2024-10-13 02:22:44.734210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:26.245 [2024-10-13 02:22:44.734324] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:26.245 [2024-10-13 02:22:44.734415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:26.245 pt1 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.245 "name": "raid_bdev1", 00:08:26.245 "uuid": "4872aa33-6755-4af5-9619-6539e6614711", 00:08:26.245 "strip_size_kb": 64, 00:08:26.245 "state": "configuring", 00:08:26.245 "raid_level": "concat", 00:08:26.245 "superblock": true, 00:08:26.245 "num_base_bdevs": 2, 00:08:26.245 "num_base_bdevs_discovered": 1, 00:08:26.245 "num_base_bdevs_operational": 2, 00:08:26.245 "base_bdevs_list": [ 00:08:26.245 { 00:08:26.245 "name": "pt1", 00:08:26.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.245 "is_configured": true, 00:08:26.245 "data_offset": 2048, 00:08:26.245 "data_size": 63488 00:08:26.245 }, 00:08:26.245 { 00:08:26.245 "name": null, 00:08:26.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.245 "is_configured": false, 00:08:26.245 "data_offset": 2048, 00:08:26.245 "data_size": 63488 00:08:26.245 } 00:08:26.245 ] 00:08:26.245 }' 00:08:26.245 02:22:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.245 02:22:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.815 [2024-10-13 02:22:45.206688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.815 [2024-10-13 02:22:45.206749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.815 [2024-10-13 02:22:45.206769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:26.815 [2024-10-13 02:22:45.206778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.815 [2024-10-13 02:22:45.207196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.815 [2024-10-13 02:22:45.207215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.815 [2024-10-13 02:22:45.207289] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:26.815 [2024-10-13 02:22:45.207308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.815 [2024-10-13 02:22:45.207397] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:26.815 [2024-10-13 02:22:45.207406] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:26.815 [2024-10-13 02:22:45.207663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:26.815 [2024-10-13 02:22:45.207784] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:26.815 [2024-10-13 02:22:45.207798] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:26.815 [2024-10-13 02:22:45.207913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.815 pt2 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.815 "name": "raid_bdev1", 00:08:26.815 "uuid": "4872aa33-6755-4af5-9619-6539e6614711", 00:08:26.815 "strip_size_kb": 64, 00:08:26.815 "state": "online", 00:08:26.815 "raid_level": "concat", 00:08:26.815 "superblock": true, 00:08:26.815 "num_base_bdevs": 2, 00:08:26.815 "num_base_bdevs_discovered": 2, 00:08:26.815 "num_base_bdevs_operational": 2, 00:08:26.815 "base_bdevs_list": [ 00:08:26.815 { 00:08:26.815 "name": "pt1", 00:08:26.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.815 "is_configured": true, 00:08:26.815 "data_offset": 2048, 00:08:26.815 "data_size": 63488 00:08:26.815 }, 00:08:26.815 { 00:08:26.815 "name": "pt2", 00:08:26.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.815 "is_configured": true, 00:08:26.815 "data_offset": 2048, 00:08:26.815 "data_size": 63488 00:08:26.815 } 00:08:26.815 ] 00:08:26.815 }' 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.815 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:27.074 
02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.074 [2024-10-13 02:22:45.686143] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.074 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.074 "name": "raid_bdev1", 00:08:27.074 "aliases": [ 00:08:27.074 "4872aa33-6755-4af5-9619-6539e6614711" 00:08:27.074 ], 00:08:27.074 "product_name": "Raid Volume", 00:08:27.074 "block_size": 512, 00:08:27.074 "num_blocks": 126976, 00:08:27.074 "uuid": "4872aa33-6755-4af5-9619-6539e6614711", 00:08:27.074 "assigned_rate_limits": { 00:08:27.074 "rw_ios_per_sec": 0, 00:08:27.074 "rw_mbytes_per_sec": 0, 00:08:27.074 "r_mbytes_per_sec": 0, 00:08:27.074 "w_mbytes_per_sec": 0 00:08:27.074 }, 00:08:27.074 "claimed": false, 00:08:27.074 "zoned": false, 00:08:27.074 "supported_io_types": { 00:08:27.074 "read": true, 00:08:27.074 "write": true, 00:08:27.074 "unmap": true, 00:08:27.074 "flush": true, 00:08:27.074 "reset": true, 00:08:27.074 "nvme_admin": false, 00:08:27.074 "nvme_io": false, 00:08:27.074 "nvme_io_md": false, 00:08:27.074 
"write_zeroes": true, 00:08:27.074 "zcopy": false, 00:08:27.074 "get_zone_info": false, 00:08:27.074 "zone_management": false, 00:08:27.074 "zone_append": false, 00:08:27.074 "compare": false, 00:08:27.074 "compare_and_write": false, 00:08:27.074 "abort": false, 00:08:27.074 "seek_hole": false, 00:08:27.074 "seek_data": false, 00:08:27.074 "copy": false, 00:08:27.074 "nvme_iov_md": false 00:08:27.074 }, 00:08:27.074 "memory_domains": [ 00:08:27.074 { 00:08:27.074 "dma_device_id": "system", 00:08:27.074 "dma_device_type": 1 00:08:27.074 }, 00:08:27.074 { 00:08:27.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.074 "dma_device_type": 2 00:08:27.074 }, 00:08:27.074 { 00:08:27.074 "dma_device_id": "system", 00:08:27.074 "dma_device_type": 1 00:08:27.074 }, 00:08:27.074 { 00:08:27.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.074 "dma_device_type": 2 00:08:27.074 } 00:08:27.074 ], 00:08:27.074 "driver_specific": { 00:08:27.074 "raid": { 00:08:27.074 "uuid": "4872aa33-6755-4af5-9619-6539e6614711", 00:08:27.074 "strip_size_kb": 64, 00:08:27.074 "state": "online", 00:08:27.074 "raid_level": "concat", 00:08:27.074 "superblock": true, 00:08:27.074 "num_base_bdevs": 2, 00:08:27.074 "num_base_bdevs_discovered": 2, 00:08:27.074 "num_base_bdevs_operational": 2, 00:08:27.074 "base_bdevs_list": [ 00:08:27.074 { 00:08:27.074 "name": "pt1", 00:08:27.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.074 "is_configured": true, 00:08:27.074 "data_offset": 2048, 00:08:27.074 "data_size": 63488 00:08:27.074 }, 00:08:27.074 { 00:08:27.074 "name": "pt2", 00:08:27.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.074 "is_configured": true, 00:08:27.074 "data_offset": 2048, 00:08:27.074 "data_size": 63488 00:08:27.075 } 00:08:27.075 ] 00:08:27.075 } 00:08:27.075 } 00:08:27.075 }' 00:08:27.075 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:27.334 pt2' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.334 02:22:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.334 [2024-10-13 02:22:45.917693] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4872aa33-6755-4af5-9619-6539e6614711 '!=' 4872aa33-6755-4af5-9619-6539e6614711 ']' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73406 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73406 ']' 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73406 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:27.334 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.335 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73406 00:08:27.335 killing process with pid 73406 
00:08:27.335 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.335 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.335 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73406' 00:08:27.335 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73406 00:08:27.335 02:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73406 00:08:27.335 [2024-10-13 02:22:45.980935] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.335 [2024-10-13 02:22:45.981036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.335 [2024-10-13 02:22:45.981095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.335 [2024-10-13 02:22:45.981104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:27.335 [2024-10-13 02:22:46.004424] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.594 02:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:27.594 00:08:27.594 real 0m3.372s 00:08:27.594 user 0m5.168s 00:08:27.594 sys 0m0.734s 00:08:27.594 02:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.594 02:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.594 ************************************ 00:08:27.594 END TEST raid_superblock_test 00:08:27.594 ************************************ 00:08:27.853 02:22:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:27.853 02:22:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:27.853 02:22:46 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.853 02:22:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.853 ************************************ 00:08:27.853 START TEST raid_read_error_test 00:08:27.853 ************************************ 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:27.853 02:22:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SSFlugKBb1 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73601 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73601 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73601 ']' 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.853 02:22:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:27.853 [2024-10-13 02:22:46.415412] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:27.853 [2024-10-13 02:22:46.415592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73601 ] 00:08:28.113 [2024-10-13 02:22:46.559345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.113 [2024-10-13 02:22:46.609413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.113 [2024-10-13 02:22:46.652796] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.113 [2024-10-13 02:22:46.652834] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.681 BaseBdev1_malloc 00:08:28.681 02:22:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.681 true 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.681 [2024-10-13 02:22:47.283481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:28.681 [2024-10-13 02:22:47.283586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.681 [2024-10-13 02:22:47.283614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:28.681 [2024-10-13 02:22:47.283623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.681 [2024-10-13 02:22:47.285769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.681 [2024-10-13 02:22:47.285810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:28.681 BaseBdev1 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.681 BaseBdev2_malloc 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.681 true 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.681 [2024-10-13 02:22:47.340473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:28.681 [2024-10-13 02:22:47.340556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.681 [2024-10-13 02:22:47.340592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:28.681 [2024-10-13 02:22:47.340607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.681 [2024-10-13 02:22:47.343640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.681 [2024-10-13 02:22:47.343683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:28.681 BaseBdev2 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.681 [2024-10-13 02:22:47.352503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.681 [2024-10-13 02:22:47.354492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.681 [2024-10-13 02:22:47.354705] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:28.681 [2024-10-13 02:22:47.354753] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:28.681 [2024-10-13 02:22:47.355026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:28.681 [2024-10-13 02:22:47.355186] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:28.681 [2024-10-13 02:22:47.355231] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:28.681 [2024-10-13 02:22:47.355404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.681 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.941 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.941 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.941 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.941 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.941 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.941 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.941 "name": "raid_bdev1", 00:08:28.941 "uuid": "05e80e87-c8d1-4d2c-ac2d-0cc6eb469353", 00:08:28.941 "strip_size_kb": 64, 00:08:28.941 "state": "online", 00:08:28.941 "raid_level": "concat", 00:08:28.941 "superblock": true, 00:08:28.941 "num_base_bdevs": 2, 00:08:28.941 "num_base_bdevs_discovered": 2, 00:08:28.941 "num_base_bdevs_operational": 2, 00:08:28.941 "base_bdevs_list": [ 00:08:28.941 { 00:08:28.941 "name": "BaseBdev1", 00:08:28.941 "uuid": "d45c62bc-27fe-5042-a59d-b1ba9a0c52c4", 00:08:28.941 "is_configured": true, 00:08:28.941 "data_offset": 2048, 00:08:28.941 "data_size": 63488 00:08:28.941 }, 00:08:28.941 { 00:08:28.941 "name": "BaseBdev2", 00:08:28.941 
"uuid": "0843d2bd-6240-5099-a2cf-345a661cbb14", 00:08:28.941 "is_configured": true, 00:08:28.941 "data_offset": 2048, 00:08:28.941 "data_size": 63488 00:08:28.941 } 00:08:28.941 ] 00:08:28.941 }' 00:08:28.941 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.941 02:22:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.200 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:29.200 02:22:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:29.458 [2024-10-13 02:22:47.892060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.394 "name": "raid_bdev1", 00:08:30.394 "uuid": "05e80e87-c8d1-4d2c-ac2d-0cc6eb469353", 00:08:30.394 "strip_size_kb": 64, 00:08:30.394 "state": "online", 00:08:30.394 "raid_level": "concat", 00:08:30.394 "superblock": true, 00:08:30.394 "num_base_bdevs": 2, 00:08:30.394 "num_base_bdevs_discovered": 2, 00:08:30.394 "num_base_bdevs_operational": 2, 00:08:30.394 "base_bdevs_list": [ 00:08:30.394 { 00:08:30.394 "name": "BaseBdev1", 00:08:30.394 "uuid": "d45c62bc-27fe-5042-a59d-b1ba9a0c52c4", 00:08:30.394 "is_configured": true, 00:08:30.394 "data_offset": 2048, 00:08:30.394 "data_size": 63488 00:08:30.394 }, 00:08:30.394 { 00:08:30.394 "name": "BaseBdev2", 00:08:30.394 "uuid": 
"0843d2bd-6240-5099-a2cf-345a661cbb14", 00:08:30.394 "is_configured": true, 00:08:30.394 "data_offset": 2048, 00:08:30.394 "data_size": 63488 00:08:30.394 } 00:08:30.394 ] 00:08:30.394 }' 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.394 02:22:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.654 [2024-10-13 02:22:49.271793] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.654 [2024-10-13 02:22:49.271835] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.654 [2024-10-13 02:22:49.274287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.654 [2024-10-13 02:22:49.274337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.654 [2024-10-13 02:22:49.274369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.654 [2024-10-13 02:22:49.274378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:30.654 { 00:08:30.654 "results": [ 00:08:30.654 { 00:08:30.654 "job": "raid_bdev1", 00:08:30.654 "core_mask": "0x1", 00:08:30.654 "workload": "randrw", 00:08:30.654 "percentage": 50, 00:08:30.654 "status": "finished", 00:08:30.654 "queue_depth": 1, 00:08:30.654 "io_size": 131072, 00:08:30.654 "runtime": 1.380593, 00:08:30.654 "iops": 17412.083068652384, 00:08:30.654 "mibps": 2176.510383581548, 00:08:30.654 "io_failed": 1, 00:08:30.654 "io_timeout": 0, 00:08:30.654 "avg_latency_us": 
79.47651032849181, 00:08:30.654 "min_latency_us": 25.2646288209607, 00:08:30.654 "max_latency_us": 1366.5257641921398 00:08:30.654 } 00:08:30.654 ], 00:08:30.654 "core_count": 1 00:08:30.654 } 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73601 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73601 ']' 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73601 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73601 00:08:30.654 killing process with pid 73601 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73601' 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73601 00:08:30.654 [2024-10-13 02:22:49.319557] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.654 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73601 00:08:30.654 [2024-10-13 02:22:49.335293] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.914 02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SSFlugKBb1 00:08:30.914 02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:30.914 
02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:30.914 02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:30.914 02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:30.914 ************************************ 00:08:30.914 END TEST raid_read_error_test 00:08:30.914 ************************************ 00:08:30.914 02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.914 02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.914 02:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:30.914 00:08:30.914 real 0m3.259s 00:08:30.914 user 0m4.135s 00:08:30.914 sys 0m0.521s 00:08:30.914 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.914 02:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 02:22:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:31.174 02:22:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:31.174 02:22:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.174 02:22:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 ************************************ 00:08:31.174 START TEST raid_write_error_test 00:08:31.174 ************************************ 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:31.174 02:22:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2yMPHatNBB 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73730 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73730 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73730 ']' 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.174 02:22:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 [2024-10-13 02:22:49.746010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:31.174 [2024-10-13 02:22:49.746195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73730 ] 00:08:31.434 [2024-10-13 02:22:49.889857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.434 [2024-10-13 02:22:49.938223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.434 [2024-10-13 02:22:49.981115] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.434 [2024-10-13 02:22:49.981235] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.004 BaseBdev1_malloc 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.004 true 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.004 [2024-10-13 02:22:50.628044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:32.004 [2024-10-13 02:22:50.628205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.004 [2024-10-13 02:22:50.628247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:32.004 [2024-10-13 02:22:50.628279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.004 [2024-10-13 02:22:50.630456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.004 [2024-10-13 02:22:50.630533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:32.004 BaseBdev1 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.004 BaseBdev2_malloc 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:32.004 02:22:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.004 true 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.004 [2024-10-13 02:22:50.678937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:32.004 [2024-10-13 02:22:50.679102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.004 [2024-10-13 02:22:50.679162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:32.004 [2024-10-13 02:22:50.679191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.004 [2024-10-13 02:22:50.681370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.004 [2024-10-13 02:22:50.681447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:32.004 BaseBdev2 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.004 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.264 [2024-10-13 02:22:50.691001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:32.264 [2024-10-13 02:22:50.692907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.265 [2024-10-13 02:22:50.693131] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:32.265 [2024-10-13 02:22:50.693169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:32.265 [2024-10-13 02:22:50.693498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:32.265 [2024-10-13 02:22:50.693664] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:32.265 [2024-10-13 02:22:50.693688] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:32.265 [2024-10-13 02:22:50.693832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.265 02:22:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.265 "name": "raid_bdev1", 00:08:32.265 "uuid": "71ca86a8-dff9-4d0e-b4ce-f29f92ea8d25", 00:08:32.265 "strip_size_kb": 64, 00:08:32.265 "state": "online", 00:08:32.265 "raid_level": "concat", 00:08:32.265 "superblock": true, 00:08:32.265 "num_base_bdevs": 2, 00:08:32.265 "num_base_bdevs_discovered": 2, 00:08:32.265 "num_base_bdevs_operational": 2, 00:08:32.265 "base_bdevs_list": [ 00:08:32.265 { 00:08:32.265 "name": "BaseBdev1", 00:08:32.265 "uuid": "497b8730-26f7-5d49-b19e-99989dc297d3", 00:08:32.265 "is_configured": true, 00:08:32.265 "data_offset": 2048, 00:08:32.265 "data_size": 63488 00:08:32.265 }, 00:08:32.265 { 00:08:32.265 "name": "BaseBdev2", 00:08:32.265 "uuid": "58dbcc10-f590-5817-a79e-8eb8dfec88bb", 00:08:32.265 "is_configured": true, 00:08:32.265 "data_offset": 2048, 00:08:32.265 "data_size": 63488 00:08:32.265 } 00:08:32.265 ] 00:08:32.265 }' 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.265 02:22:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.525 02:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:32.525 02:22:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:32.525 [2024-10-13 02:22:51.194498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.464 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.724 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.724 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.724 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.724 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.724 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.724 "name": "raid_bdev1", 00:08:33.724 "uuid": "71ca86a8-dff9-4d0e-b4ce-f29f92ea8d25", 00:08:33.724 "strip_size_kb": 64, 00:08:33.724 "state": "online", 00:08:33.724 "raid_level": "concat", 00:08:33.724 "superblock": true, 00:08:33.724 "num_base_bdevs": 2, 00:08:33.724 "num_base_bdevs_discovered": 2, 00:08:33.724 "num_base_bdevs_operational": 2, 00:08:33.724 "base_bdevs_list": [ 00:08:33.724 { 00:08:33.724 "name": "BaseBdev1", 00:08:33.724 "uuid": "497b8730-26f7-5d49-b19e-99989dc297d3", 00:08:33.724 "is_configured": true, 00:08:33.724 "data_offset": 2048, 00:08:33.724 "data_size": 63488 00:08:33.724 }, 00:08:33.724 { 00:08:33.724 "name": "BaseBdev2", 00:08:33.724 "uuid": "58dbcc10-f590-5817-a79e-8eb8dfec88bb", 00:08:33.724 "is_configured": true, 00:08:33.724 "data_offset": 2048, 00:08:33.724 "data_size": 63488 00:08:33.724 } 00:08:33.724 ] 00:08:33.724 }' 00:08:33.724 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.724 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.984 [2024-10-13 02:22:52.542289] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.984 [2024-10-13 02:22:52.542415] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.984 { 00:08:33.984 "results": [ 00:08:33.984 { 00:08:33.984 "job": "raid_bdev1", 00:08:33.984 "core_mask": "0x1", 00:08:33.984 "workload": "randrw", 00:08:33.984 "percentage": 50, 00:08:33.984 "status": "finished", 00:08:33.984 "queue_depth": 1, 00:08:33.984 "io_size": 131072, 00:08:33.984 "runtime": 1.348681, 00:08:33.984 "iops": 16846.830347576633, 00:08:33.984 "mibps": 2105.853793447079, 00:08:33.984 "io_failed": 1, 00:08:33.984 "io_timeout": 0, 00:08:33.984 "avg_latency_us": 82.1522836302389, 00:08:33.984 "min_latency_us": 25.2646288209607, 00:08:33.984 "max_latency_us": 1552.5449781659388 00:08:33.984 } 00:08:33.984 ], 00:08:33.984 "core_count": 1 00:08:33.984 } 00:08:33.984 [2024-10-13 02:22:52.544950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.984 [2024-10-13 02:22:52.544993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.984 [2024-10-13 02:22:52.545026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.984 [2024-10-13 02:22:52.545035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73730 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@950 -- # '[' -z 73730 ']' 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73730 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73730 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.984 killing process with pid 73730 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73730' 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73730 00:08:33.984 [2024-10-13 02:22:52.592722] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.984 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73730 00:08:33.984 [2024-10-13 02:22:52.608705] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2yMPHatNBB 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:34.244 00:08:34.244 real 0m3.204s 00:08:34.244 user 0m4.023s 00:08:34.244 sys 0m0.501s 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.244 02:22:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.244 ************************************ 00:08:34.244 END TEST raid_write_error_test 00:08:34.244 ************************************ 00:08:34.244 02:22:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:34.244 02:22:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:34.244 02:22:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:34.244 02:22:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.244 02:22:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.244 ************************************ 00:08:34.244 START TEST raid_state_function_test 00:08:34.244 ************************************ 00:08:34.244 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73857 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73857' 00:08:34.504 Process raid pid: 73857 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73857 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73857 ']' 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.504 02:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.504 [2024-10-13 02:22:53.013614] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:34.504 [2024-10-13 02:22:53.013782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.504 [2024-10-13 02:22:53.159426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.764 [2024-10-13 02:22:53.206471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.764 [2024-10-13 02:22:53.249987] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.764 [2024-10-13 02:22:53.250109] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.333 [2024-10-13 02:22:53.872113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.333 [2024-10-13 02:22:53.872266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.333 [2024-10-13 02:22:53.872300] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.333 [2024-10-13 02:22:53.872325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.333 02:22:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.333 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.334 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.334 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.334 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.334 "name": "Existed_Raid", 00:08:35.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.334 "strip_size_kb": 0, 00:08:35.334 "state": "configuring", 00:08:35.334 
"raid_level": "raid1", 00:08:35.334 "superblock": false, 00:08:35.334 "num_base_bdevs": 2, 00:08:35.334 "num_base_bdevs_discovered": 0, 00:08:35.334 "num_base_bdevs_operational": 2, 00:08:35.334 "base_bdevs_list": [ 00:08:35.334 { 00:08:35.334 "name": "BaseBdev1", 00:08:35.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.334 "is_configured": false, 00:08:35.334 "data_offset": 0, 00:08:35.334 "data_size": 0 00:08:35.334 }, 00:08:35.334 { 00:08:35.334 "name": "BaseBdev2", 00:08:35.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.334 "is_configured": false, 00:08:35.334 "data_offset": 0, 00:08:35.334 "data_size": 0 00:08:35.334 } 00:08:35.334 ] 00:08:35.334 }' 00:08:35.334 02:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.334 02:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 [2024-10-13 02:22:54.287289] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.904 [2024-10-13 02:22:54.287392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:35.904 [2024-10-13 02:22:54.299263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.904 [2024-10-13 02:22:54.299354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.904 [2024-10-13 02:22:54.299394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.904 [2024-10-13 02:22:54.299417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 [2024-10-13 02:22:54.320405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.904 BaseBdev1 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 [ 00:08:35.904 { 00:08:35.904 "name": "BaseBdev1", 00:08:35.904 "aliases": [ 00:08:35.904 "b1bbe238-6ffc-46a4-bf78-ec1665b5e808" 00:08:35.904 ], 00:08:35.904 "product_name": "Malloc disk", 00:08:35.904 "block_size": 512, 00:08:35.904 "num_blocks": 65536, 00:08:35.904 "uuid": "b1bbe238-6ffc-46a4-bf78-ec1665b5e808", 00:08:35.904 "assigned_rate_limits": { 00:08:35.904 "rw_ios_per_sec": 0, 00:08:35.904 "rw_mbytes_per_sec": 0, 00:08:35.904 "r_mbytes_per_sec": 0, 00:08:35.904 "w_mbytes_per_sec": 0 00:08:35.904 }, 00:08:35.904 "claimed": true, 00:08:35.904 "claim_type": "exclusive_write", 00:08:35.904 "zoned": false, 00:08:35.904 "supported_io_types": { 00:08:35.904 "read": true, 00:08:35.904 "write": true, 00:08:35.904 "unmap": true, 00:08:35.904 "flush": true, 00:08:35.904 "reset": true, 00:08:35.904 "nvme_admin": false, 00:08:35.904 "nvme_io": false, 00:08:35.904 "nvme_io_md": false, 00:08:35.904 "write_zeroes": true, 00:08:35.904 "zcopy": true, 00:08:35.904 "get_zone_info": false, 00:08:35.904 "zone_management": false, 00:08:35.904 "zone_append": false, 00:08:35.904 "compare": false, 00:08:35.904 "compare_and_write": false, 00:08:35.904 "abort": true, 00:08:35.904 "seek_hole": false, 00:08:35.904 "seek_data": false, 00:08:35.904 "copy": true, 00:08:35.904 "nvme_iov_md": 
false 00:08:35.904 }, 00:08:35.904 "memory_domains": [ 00:08:35.904 { 00:08:35.904 "dma_device_id": "system", 00:08:35.904 "dma_device_type": 1 00:08:35.904 }, 00:08:35.904 { 00:08:35.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.904 "dma_device_type": 2 00:08:35.904 } 00:08:35.904 ], 00:08:35.904 "driver_specific": {} 00:08:35.904 } 00:08:35.904 ] 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.904 
02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.904 "name": "Existed_Raid", 00:08:35.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.904 "strip_size_kb": 0, 00:08:35.904 "state": "configuring", 00:08:35.904 "raid_level": "raid1", 00:08:35.904 "superblock": false, 00:08:35.904 "num_base_bdevs": 2, 00:08:35.904 "num_base_bdevs_discovered": 1, 00:08:35.904 "num_base_bdevs_operational": 2, 00:08:35.905 "base_bdevs_list": [ 00:08:35.905 { 00:08:35.905 "name": "BaseBdev1", 00:08:35.905 "uuid": "b1bbe238-6ffc-46a4-bf78-ec1665b5e808", 00:08:35.905 "is_configured": true, 00:08:35.905 "data_offset": 0, 00:08:35.905 "data_size": 65536 00:08:35.905 }, 00:08:35.905 { 00:08:35.905 "name": "BaseBdev2", 00:08:35.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.905 "is_configured": false, 00:08:35.905 "data_offset": 0, 00:08:35.905 "data_size": 0 00:08:35.905 } 00:08:35.905 ] 00:08:35.905 }' 00:08:35.905 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.905 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.165 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.165 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.165 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.165 [2024-10-13 02:22:54.827580] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.165 [2024-10-13 02:22:54.827704] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:36.165 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.165 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:36.165 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.165 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.165 [2024-10-13 02:22:54.839586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.165 [2024-10-13 02:22:54.841448] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.165 [2024-10-13 02:22:54.841525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.165 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.165 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.425 "name": "Existed_Raid", 00:08:36.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.425 "strip_size_kb": 0, 00:08:36.425 "state": "configuring", 00:08:36.425 "raid_level": "raid1", 00:08:36.425 "superblock": false, 00:08:36.425 "num_base_bdevs": 2, 00:08:36.425 "num_base_bdevs_discovered": 1, 00:08:36.425 "num_base_bdevs_operational": 2, 00:08:36.425 "base_bdevs_list": [ 00:08:36.425 { 00:08:36.425 "name": "BaseBdev1", 00:08:36.425 "uuid": "b1bbe238-6ffc-46a4-bf78-ec1665b5e808", 00:08:36.425 "is_configured": true, 00:08:36.425 "data_offset": 0, 00:08:36.425 "data_size": 65536 00:08:36.425 }, 00:08:36.425 { 00:08:36.425 "name": "BaseBdev2", 00:08:36.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.425 "is_configured": false, 00:08:36.425 "data_offset": 0, 00:08:36.425 "data_size": 0 00:08:36.425 } 00:08:36.425 ] 
00:08:36.425 }' 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.425 02:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.686 [2024-10-13 02:22:55.340649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.686 [2024-10-13 02:22:55.340817] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:36.686 [2024-10-13 02:22:55.340865] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:36.686 [2024-10-13 02:22:55.341275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:36.686 [2024-10-13 02:22:55.341519] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:36.686 [2024-10-13 02:22:55.341580] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:36.686 [2024-10-13 02:22:55.341903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.686 BaseBdev2 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.686 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.946 [ 00:08:36.946 { 00:08:36.946 "name": "BaseBdev2", 00:08:36.946 "aliases": [ 00:08:36.946 "6295549e-f13f-4c55-920b-1dd2651906a1" 00:08:36.946 ], 00:08:36.946 "product_name": "Malloc disk", 00:08:36.946 "block_size": 512, 00:08:36.946 "num_blocks": 65536, 00:08:36.946 "uuid": "6295549e-f13f-4c55-920b-1dd2651906a1", 00:08:36.946 "assigned_rate_limits": { 00:08:36.946 "rw_ios_per_sec": 0, 00:08:36.946 "rw_mbytes_per_sec": 0, 00:08:36.946 "r_mbytes_per_sec": 0, 00:08:36.946 "w_mbytes_per_sec": 0 00:08:36.946 }, 00:08:36.946 "claimed": true, 00:08:36.946 "claim_type": "exclusive_write", 00:08:36.946 "zoned": false, 00:08:36.946 "supported_io_types": { 00:08:36.946 "read": true, 00:08:36.946 "write": true, 00:08:36.946 "unmap": true, 00:08:36.946 "flush": true, 00:08:36.946 "reset": true, 00:08:36.946 "nvme_admin": false, 00:08:36.946 "nvme_io": false, 00:08:36.946 "nvme_io_md": false, 00:08:36.946 "write_zeroes": 
true, 00:08:36.946 "zcopy": true, 00:08:36.946 "get_zone_info": false, 00:08:36.946 "zone_management": false, 00:08:36.946 "zone_append": false, 00:08:36.946 "compare": false, 00:08:36.946 "compare_and_write": false, 00:08:36.946 "abort": true, 00:08:36.946 "seek_hole": false, 00:08:36.946 "seek_data": false, 00:08:36.946 "copy": true, 00:08:36.946 "nvme_iov_md": false 00:08:36.946 }, 00:08:36.946 "memory_domains": [ 00:08:36.946 { 00:08:36.946 "dma_device_id": "system", 00:08:36.946 "dma_device_type": 1 00:08:36.946 }, 00:08:36.946 { 00:08:36.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.946 "dma_device_type": 2 00:08:36.946 } 00:08:36.946 ], 00:08:36.946 "driver_specific": {} 00:08:36.946 } 00:08:36.946 ] 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.947 02:22:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.947 "name": "Existed_Raid", 00:08:36.947 "uuid": "223d8762-cad6-44d8-8367-7def677bdd20", 00:08:36.947 "strip_size_kb": 0, 00:08:36.947 "state": "online", 00:08:36.947 "raid_level": "raid1", 00:08:36.947 "superblock": false, 00:08:36.947 "num_base_bdevs": 2, 00:08:36.947 "num_base_bdevs_discovered": 2, 00:08:36.947 "num_base_bdevs_operational": 2, 00:08:36.947 "base_bdevs_list": [ 00:08:36.947 { 00:08:36.947 "name": "BaseBdev1", 00:08:36.947 "uuid": "b1bbe238-6ffc-46a4-bf78-ec1665b5e808", 00:08:36.947 "is_configured": true, 00:08:36.947 "data_offset": 0, 00:08:36.947 "data_size": 65536 00:08:36.947 }, 00:08:36.947 { 00:08:36.947 "name": "BaseBdev2", 00:08:36.947 "uuid": "6295549e-f13f-4c55-920b-1dd2651906a1", 00:08:36.947 "is_configured": true, 00:08:36.947 "data_offset": 0, 00:08:36.947 "data_size": 65536 00:08:36.947 } 00:08:36.947 ] 00:08:36.947 }' 00:08:36.947 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.947 02:22:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.207 [2024-10-13 02:22:55.864085] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.207 "name": "Existed_Raid", 00:08:37.207 "aliases": [ 00:08:37.207 "223d8762-cad6-44d8-8367-7def677bdd20" 00:08:37.207 ], 00:08:37.207 "product_name": "Raid Volume", 00:08:37.207 "block_size": 512, 00:08:37.207 "num_blocks": 65536, 00:08:37.207 "uuid": "223d8762-cad6-44d8-8367-7def677bdd20", 00:08:37.207 "assigned_rate_limits": { 00:08:37.207 "rw_ios_per_sec": 0, 00:08:37.207 "rw_mbytes_per_sec": 0, 00:08:37.207 "r_mbytes_per_sec": 0, 00:08:37.207 
"w_mbytes_per_sec": 0 00:08:37.207 }, 00:08:37.207 "claimed": false, 00:08:37.207 "zoned": false, 00:08:37.207 "supported_io_types": { 00:08:37.207 "read": true, 00:08:37.207 "write": true, 00:08:37.207 "unmap": false, 00:08:37.207 "flush": false, 00:08:37.207 "reset": true, 00:08:37.207 "nvme_admin": false, 00:08:37.207 "nvme_io": false, 00:08:37.207 "nvme_io_md": false, 00:08:37.207 "write_zeroes": true, 00:08:37.207 "zcopy": false, 00:08:37.207 "get_zone_info": false, 00:08:37.207 "zone_management": false, 00:08:37.207 "zone_append": false, 00:08:37.207 "compare": false, 00:08:37.207 "compare_and_write": false, 00:08:37.207 "abort": false, 00:08:37.207 "seek_hole": false, 00:08:37.207 "seek_data": false, 00:08:37.207 "copy": false, 00:08:37.207 "nvme_iov_md": false 00:08:37.207 }, 00:08:37.207 "memory_domains": [ 00:08:37.207 { 00:08:37.207 "dma_device_id": "system", 00:08:37.207 "dma_device_type": 1 00:08:37.207 }, 00:08:37.207 { 00:08:37.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.207 "dma_device_type": 2 00:08:37.207 }, 00:08:37.207 { 00:08:37.207 "dma_device_id": "system", 00:08:37.207 "dma_device_type": 1 00:08:37.207 }, 00:08:37.207 { 00:08:37.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.207 "dma_device_type": 2 00:08:37.207 } 00:08:37.207 ], 00:08:37.207 "driver_specific": { 00:08:37.207 "raid": { 00:08:37.207 "uuid": "223d8762-cad6-44d8-8367-7def677bdd20", 00:08:37.207 "strip_size_kb": 0, 00:08:37.207 "state": "online", 00:08:37.207 "raid_level": "raid1", 00:08:37.207 "superblock": false, 00:08:37.207 "num_base_bdevs": 2, 00:08:37.207 "num_base_bdevs_discovered": 2, 00:08:37.207 "num_base_bdevs_operational": 2, 00:08:37.207 "base_bdevs_list": [ 00:08:37.207 { 00:08:37.207 "name": "BaseBdev1", 00:08:37.207 "uuid": "b1bbe238-6ffc-46a4-bf78-ec1665b5e808", 00:08:37.207 "is_configured": true, 00:08:37.207 "data_offset": 0, 00:08:37.207 "data_size": 65536 00:08:37.207 }, 00:08:37.207 { 00:08:37.207 "name": "BaseBdev2", 00:08:37.207 "uuid": 
"6295549e-f13f-4c55-920b-1dd2651906a1", 00:08:37.207 "is_configured": true, 00:08:37.207 "data_offset": 0, 00:08:37.207 "data_size": 65536 00:08:37.207 } 00:08:37.207 ] 00:08:37.207 } 00:08:37.207 } 00:08:37.207 }' 00:08:37.207 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.468 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:37.468 BaseBdev2' 00:08:37.468 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.468 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.468 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.468 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:37.468 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.468 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.468 02:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.468 02:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.468 [2024-10-13 02:22:56.059494] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.468 "name": "Existed_Raid", 00:08:37.468 "uuid": "223d8762-cad6-44d8-8367-7def677bdd20", 00:08:37.468 "strip_size_kb": 0, 00:08:37.468 "state": "online", 00:08:37.468 "raid_level": "raid1", 00:08:37.468 "superblock": false, 00:08:37.468 "num_base_bdevs": 2, 00:08:37.468 "num_base_bdevs_discovered": 1, 00:08:37.468 "num_base_bdevs_operational": 1, 00:08:37.468 "base_bdevs_list": [ 00:08:37.468 { 
00:08:37.468 "name": null, 00:08:37.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.468 "is_configured": false, 00:08:37.468 "data_offset": 0, 00:08:37.468 "data_size": 65536 00:08:37.468 }, 00:08:37.468 { 00:08:37.468 "name": "BaseBdev2", 00:08:37.468 "uuid": "6295549e-f13f-4c55-920b-1dd2651906a1", 00:08:37.468 "is_configured": true, 00:08:37.468 "data_offset": 0, 00:08:37.468 "data_size": 65536 00:08:37.468 } 00:08:37.468 ] 00:08:37.468 }' 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.468 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
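The `verify_raid_bdev_state` helper exercised throughout this log filters the `bdev_raid_get_bdevs all` RPC output with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the reported fields against the expected state. As a minimal sketch of the same check in Python (the field names and JSON shape are taken directly from the RPC dumps above; the values and the helper name are illustrative, not SPDK's implementation):

```python
import json

# Sample output in the shape returned by SPDK's `bdev_raid_get_bdevs all` RPC,
# using the fields visible in the log records above (values are illustrative).
raid_bdevs_json = '''
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "superblock": false,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 2,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true,
       "data_offset": 0, "data_size": 65536},
      {"name": "BaseBdev2", "is_configured": false,
       "data_offset": 0, "data_size": 0}
    ]
  }
]
'''

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Mirror the comparisons verify_raid_bdev_state makes in bdev_raid.sh:
    select the named raid bdev from the RPC output and check its fields."""
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # A base bdev counts as discovered once it is configured into the raid.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return info

info = verify_raid_bdev_state(json.loads(raid_bdevs_json),
                              "Existed_Raid", "configuring", "raid1", 0, 2)
print(info["num_base_bdevs_discovered"])  # 1
```

In the test itself the same selection is done in the shell with `jq` against the live RPC socket; this sketch only restates which fields the state check depends on.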
00:08:38.039 [2024-10-13 02:22:56.558244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.039 [2024-10-13 02:22:56.558394] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.039 [2024-10-13 02:22:56.570114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.039 [2024-10-13 02:22:56.570241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.039 [2024-10-13 02:22:56.570282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73857 00:08:38.039 02:22:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73857 ']' 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73857 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73857 00:08:38.039 killing process with pid 73857 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73857' 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73857 00:08:38.039 [2024-10-13 02:22:56.666781] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.039 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73857 00:08:38.039 [2024-10-13 02:22:56.667751] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.299 02:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:38.299 00:08:38.299 real 0m3.981s 00:08:38.299 user 0m6.267s 00:08:38.299 sys 0m0.790s 00:08:38.299 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.299 02:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.299 ************************************ 00:08:38.299 END TEST raid_state_function_test 00:08:38.299 ************************************ 00:08:38.299 02:22:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:38.299 02:22:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:38.299 02:22:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.299 02:22:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.299 ************************************ 00:08:38.299 START TEST raid_state_function_test_sb 00:08:38.299 ************************************ 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74099 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74099' 00:08:38.560 Process raid pid: 74099 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74099 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74099 ']' 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.560 02:22:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:38.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:38.560 02:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.560 [2024-10-13 02:22:57.070850] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:08:38.560 [2024-10-13 02:22:57.071014] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:38.560 [2024-10-13 02:22:57.217738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:38.820 [2024-10-13 02:22:57.265125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.820 [2024-10-13 02:22:57.308824] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:38.820 [2024-10-13 02:22:57.308868] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.422 [2024-10-13 02:22:57.898741]
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:39.422 [2024-10-13 02:22:57.898891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:39.422 [2024-10-13 02:22:57.898926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:39.422 [2024-10-13 02:22:57.898952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:39.422 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:39.423 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.423 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r
'.[] | select(.name == "Existed_Raid")'
00:08:39.423 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.423 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.423 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.423 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:39.423 "name": "Existed_Raid",
00:08:39.423 "uuid": "27bbfb2c-eada-4536-b799-f493ae9c03c7",
00:08:39.423 "strip_size_kb": 0,
00:08:39.423 "state": "configuring",
00:08:39.423 "raid_level": "raid1",
00:08:39.423 "superblock": true,
00:08:39.423 "num_base_bdevs": 2,
00:08:39.423 "num_base_bdevs_discovered": 0,
00:08:39.423 "num_base_bdevs_operational": 2,
00:08:39.423 "base_bdevs_list": [
00:08:39.423 {
00:08:39.423 "name": "BaseBdev1",
00:08:39.423 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.423 "is_configured": false,
00:08:39.423 "data_offset": 0,
00:08:39.423 "data_size": 0
00:08:39.423 },
00:08:39.423 {
00:08:39.423 "name": "BaseBdev2",
00:08:39.423 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.423 "is_configured": false,
00:08:39.423 "data_offset": 0,
00:08:39.423 "data_size": 0
00:08:39.423 }
00:08:39.423 ]
00:08:39.423 }'
00:08:39.423 02:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:39.423 02:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.682 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:39.682 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.682 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.682 [2024-10-13 02:22:58.349903] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid
bdev: Existed_Raid
00:08:39.682 [2024-10-13 02:22:58.350017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:08:39.682 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.682 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:39.682 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.682 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.682 [2024-10-13 02:22:58.361870] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:39.682 [2024-10-13 02:22:58.361974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:39.682 [2024-10-13 02:22:58.362015] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:39.682 [2024-10-13 02:22:58.362038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.942 [2024-10-13 02:22:58.382764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:39.942 BaseBdev1
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.942 [
00:08:39.942 {
00:08:39.942 "name": "BaseBdev1",
00:08:39.942 "aliases": [
00:08:39.942 "6ada6d6d-c09a-432d-8d63-b092c0093693"
00:08:39.942 ],
00:08:39.942 "product_name": "Malloc disk",
00:08:39.942 "block_size": 512,
00:08:39.942 "num_blocks": 65536,
00:08:39.942 "uuid": "6ada6d6d-c09a-432d-8d63-b092c0093693",
00:08:39.942 "assigned_rate_limits": {
00:08:39.942 "rw_ios_per_sec": 0,
00:08:39.942 "rw_mbytes_per_sec": 0,
00:08:39.942 "r_mbytes_per_sec": 0,
00:08:39.942 "w_mbytes_per_sec": 0
00:08:39.942 },
00:08:39.942 "claimed": true,
00:08:39.942 "claim_type": "exclusive_write",
00:08:39.942 "zoned": false,
00:08:39.942 "supported_io_types": {
00:08:39.942 "read": true,
00:08:39.942 "write": true,
00:08:39.942 "unmap": true,
00:08:39.942 "flush": true,
00:08:39.942 "reset": true,
00:08:39.942 "nvme_admin": false,
00:08:39.942 "nvme_io": false,
00:08:39.942 "nvme_io_md": false,
00:08:39.942 "write_zeroes": true,
00:08:39.942 "zcopy": true,
00:08:39.942 "get_zone_info": false,
00:08:39.942 "zone_management": false,
00:08:39.942 "zone_append": false,
00:08:39.942 "compare": false,
00:08:39.942 "compare_and_write": false,
00:08:39.942 "abort": true,
00:08:39.942 "seek_hole": false,
00:08:39.942 "seek_data": false,
00:08:39.942 "copy": true,
00:08:39.942 "nvme_iov_md": false
00:08:39.942 },
00:08:39.942 "memory_domains": [
00:08:39.942 {
00:08:39.942 "dma_device_id": "system",
00:08:39.942 "dma_device_type": 1
00:08:39.942 },
00:08:39.942 {
00:08:39.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:39.942 "dma_device_type": 2
00:08:39.942 }
00:08:39.942 ],
00:08:39.942 "driver_specific": {}
00:08:39.942 }
00:08:39.942 ]
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:39.942 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:39.942 "name": "Existed_Raid",
00:08:39.942 "uuid": "c666e6d1-4fcb-4a1f-bb3a-70fbb6baca0f",
00:08:39.942 "strip_size_kb": 0,
00:08:39.942 "state": "configuring",
00:08:39.942 "raid_level": "raid1",
00:08:39.942 "superblock": true,
00:08:39.942 "num_base_bdevs": 2,
00:08:39.942 "num_base_bdevs_discovered": 1,
00:08:39.942 "num_base_bdevs_operational": 2,
00:08:39.942 "base_bdevs_list": [
00:08:39.942 {
00:08:39.942 "name": "BaseBdev1",
00:08:39.942 "uuid": "6ada6d6d-c09a-432d-8d63-b092c0093693",
00:08:39.942 "is_configured": true,
00:08:39.942 "data_offset": 2048,
00:08:39.942 "data_size": 63488
00:08:39.942 },
00:08:39.942 {
00:08:39.942 "name": "BaseBdev2",
00:08:39.942 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:39.942 "is_configured": false,
"data_offset": 0,
00:08:39.942 "data_size": 0
00:08:39.942 }
00:08:39.942 ]
00:08:39.942 }'
00:08:39.943 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:39.943 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.202 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:40.202 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.202 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.202 [2024-10-13 02:22:58.865979] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:40.202 [2024-10-13 02:22:58.866075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:08:40.202 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.202 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:40.202 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.202 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.202 [2024-10-13 02:22:58.878036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:40.203 [2024-10-13 02:22:58.879813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:40.203 [2024-10-13 02:22:58.879857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:40.463 "name": "Existed_Raid",
00:08:40.463 "uuid": "3c5c0aab-dbcf-46d5-90c6-ac089f5b856d",
00:08:40.463 "strip_size_kb": 0,
00:08:40.463 "state": "configuring",
00:08:40.463 "raid_level": "raid1",
00:08:40.463 "superblock": true,
00:08:40.463 "num_base_bdevs": 2,
00:08:40.463 "num_base_bdevs_discovered": 1,
00:08:40.463 "num_base_bdevs_operational": 2,
00:08:40.463 "base_bdevs_list": [
00:08:40.463 {
00:08:40.463 "name": "BaseBdev1",
00:08:40.463 "uuid": "6ada6d6d-c09a-432d-8d63-b092c0093693",
00:08:40.463 "is_configured": true,
00:08:40.463 "data_offset": 2048,
00:08:40.463 "data_size": 63488
00:08:40.463 },
00:08:40.463 {
00:08:40.463 "name": "BaseBdev2",
00:08:40.463 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:40.463 "is_configured": false,
00:08:40.463 "data_offset": 0,
00:08:40.463 "data_size": 0
00:08:40.463 }
00:08:40.463 ]
00:08:40.463 }'
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:40.463 02:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.723 [2024-10-13 02:22:59.340718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:40.723 [2024-10-13 02:22:59.341095] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:08:40.723 [2024-10-13 02:22:59.341173] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:40.723 [2024-10-13 02:22:59.341596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
BaseBdev2
00:08:40.723 [2024-10-13 02:22:59.341889] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:08:40.723 [2024-10-13 02:22:59.341989] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900
00:08:40.723 [2024-10-13 02:22:59.342218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@10 -- # set +x
00:08:40.723 [
00:08:40.723 {
00:08:40.723 "name": "BaseBdev2",
00:08:40.723 "aliases": [
00:08:40.723 "ac014ad0-698c-486a-a42d-1c2876ec66f3"
00:08:40.723 ],
00:08:40.723 "product_name": "Malloc disk",
00:08:40.723 "block_size": 512,
00:08:40.723 "num_blocks": 65536,
00:08:40.723 "uuid": "ac014ad0-698c-486a-a42d-1c2876ec66f3",
00:08:40.723 "assigned_rate_limits": {
00:08:40.723 "rw_ios_per_sec": 0,
00:08:40.723 "rw_mbytes_per_sec": 0,
00:08:40.723 "r_mbytes_per_sec": 0,
00:08:40.723 "w_mbytes_per_sec": 0
00:08:40.723 },
00:08:40.723 "claimed": true,
00:08:40.723 "claim_type": "exclusive_write",
00:08:40.723 "zoned": false,
00:08:40.723 "supported_io_types": {
00:08:40.723 "read": true,
00:08:40.723 "write": true,
00:08:40.723 "unmap": true,
00:08:40.723 "flush": true,
00:08:40.723 "reset": true,
00:08:40.723 "nvme_admin": false,
00:08:40.723 "nvme_io": false,
00:08:40.723 "nvme_io_md": false,
00:08:40.723 "write_zeroes": true,
00:08:40.723 "zcopy": true,
00:08:40.723 "get_zone_info": false,
00:08:40.723 "zone_management": false,
00:08:40.723 "zone_append": false,
00:08:40.723 "compare": false,
00:08:40.723 "compare_and_write": false,
00:08:40.723 "abort": true,
00:08:40.723 "seek_hole": false,
00:08:40.723 "seek_data": false,
00:08:40.723 "copy": true,
00:08:40.723 "nvme_iov_md": false
00:08:40.723 },
00:08:40.723 "memory_domains": [
00:08:40.723 {
00:08:40.723 "dma_device_id": "system",
00:08:40.723 "dma_device_type": 1
00:08:40.723 },
00:08:40.723 {
00:08:40.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.723 "dma_device_type": 2
00:08:40.723 }
00:08:40.723 ],
00:08:40.723 "driver_specific": {}
00:08:40.723 }
00:08:40.723 ]
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:40.723 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:40.724 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:40.724 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:40.724 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:40.724 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:40.724 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.724 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:40.724 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:40.724 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:40.984 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:40.984 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113
-- # raid_bdev_info='{
00:08:40.984 "name": "Existed_Raid",
00:08:40.984 "uuid": "3c5c0aab-dbcf-46d5-90c6-ac089f5b856d",
00:08:40.984 "strip_size_kb": 0,
00:08:40.984 "state": "online",
00:08:40.984 "raid_level": "raid1",
00:08:40.984 "superblock": true,
00:08:40.984 "num_base_bdevs": 2,
00:08:40.984 "num_base_bdevs_discovered": 2,
00:08:40.984 "num_base_bdevs_operational": 2,
00:08:40.984 "base_bdevs_list": [
00:08:40.984 {
00:08:40.984 "name": "BaseBdev1",
00:08:40.984 "uuid": "6ada6d6d-c09a-432d-8d63-b092c0093693",
00:08:40.984 "is_configured": true,
00:08:40.984 "data_offset": 2048,
00:08:40.984 "data_size": 63488
00:08:40.984 },
00:08:40.984 {
00:08:40.984 "name": "BaseBdev2",
00:08:40.984 "uuid": "ac014ad0-698c-486a-a42d-1c2876ec66f3",
00:08:40.984 "is_configured": true,
00:08:40.984 "data_offset": 2048,
00:08:40.984 "data_size": 63488
00:08:40.984 }
00:08:40.984 ]
00:08:40.984 }'
00:08:40.984 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:40.984 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:41.244 02:22:59
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:41.244 [2024-10-13 02:22:59.812244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.244 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:41.244 "name": "Existed_Raid",
00:08:41.244 "aliases": [
00:08:41.244 "3c5c0aab-dbcf-46d5-90c6-ac089f5b856d"
00:08:41.244 ],
00:08:41.244 "product_name": "Raid Volume",
00:08:41.244 "block_size": 512,
00:08:41.244 "num_blocks": 63488,
00:08:41.244 "uuid": "3c5c0aab-dbcf-46d5-90c6-ac089f5b856d",
00:08:41.244 "assigned_rate_limits": {
00:08:41.244 "rw_ios_per_sec": 0,
00:08:41.244 "rw_mbytes_per_sec": 0,
00:08:41.244 "r_mbytes_per_sec": 0,
00:08:41.244 "w_mbytes_per_sec": 0
00:08:41.244 },
00:08:41.244 "claimed": false,
00:08:41.244 "zoned": false,
00:08:41.244 "supported_io_types": {
00:08:41.244 "read": true,
00:08:41.244 "write": true,
00:08:41.244 "unmap": false,
00:08:41.244 "flush": false,
00:08:41.244 "reset": true,
00:08:41.244 "nvme_admin": false,
00:08:41.244 "nvme_io": false,
00:08:41.244 "nvme_io_md": false,
00:08:41.244 "write_zeroes": true,
00:08:41.244 "zcopy": false,
00:08:41.244 "get_zone_info": false,
00:08:41.244 "zone_management": false,
00:08:41.244 "zone_append": false,
00:08:41.244 "compare": false,
00:08:41.244 "compare_and_write": false,
00:08:41.244 "abort": false,
00:08:41.244 "seek_hole": false,
00:08:41.244 "seek_data": false,
00:08:41.244 "copy": false,
00:08:41.244 "nvme_iov_md": false
00:08:41.244 },
00:08:41.244 "memory_domains": [
00:08:41.244 {
00:08:41.244 "dma_device_id": "system",
"dma_device_type": 1
00:08:41.244 },
00:08:41.244 {
00:08:41.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:41.244 "dma_device_type": 2
00:08:41.244 },
00:08:41.244 {
00:08:41.244 "dma_device_id": "system",
00:08:41.244 "dma_device_type": 1
00:08:41.244 },
00:08:41.244 {
00:08:41.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:41.244 "dma_device_type": 2
00:08:41.244 }
00:08:41.244 ],
00:08:41.244 "driver_specific": {
00:08:41.244 "raid": {
00:08:41.244 "uuid": "3c5c0aab-dbcf-46d5-90c6-ac089f5b856d",
00:08:41.244 "strip_size_kb": 0,
00:08:41.244 "state": "online",
00:08:41.244 "raid_level": "raid1",
00:08:41.244 "superblock": true,
00:08:41.244 "num_base_bdevs": 2,
00:08:41.244 "num_base_bdevs_discovered": 2,
00:08:41.244 "num_base_bdevs_operational": 2,
00:08:41.244 "base_bdevs_list": [
00:08:41.244 {
00:08:41.244 "name": "BaseBdev1",
00:08:41.244 "uuid": "6ada6d6d-c09a-432d-8d63-b092c0093693",
00:08:41.244 "is_configured": true,
00:08:41.244 "data_offset": 2048,
00:08:41.244 "data_size": 63488
00:08:41.244 },
00:08:41.245 {
00:08:41.245 "name": "BaseBdev2",
00:08:41.245 "uuid": "ac014ad0-698c-486a-a42d-1c2876ec66f3",
00:08:41.245 "is_configured": true,
00:08:41.245 "data_offset": 2048,
00:08:41.245 "data_size": 63488
00:08:41.245 }
00:08:41.245 ]
00:08:41.245 }
00:08:41.245 }
00:08:41.245 }'
00:08:41.245 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:41.245 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:41.245 BaseBdev2'
00:08:41.245 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191
-- # for name in $base_bdev_names
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:41.505 02:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:41.505 02:23:00
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.505 [2024-10-13 02:23:00.011629] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.505 "name": "Existed_Raid", 00:08:41.505 "uuid": "3c5c0aab-dbcf-46d5-90c6-ac089f5b856d", 00:08:41.505 "strip_size_kb": 0, 00:08:41.505 "state": "online", 00:08:41.505 "raid_level": "raid1", 00:08:41.505 "superblock": true, 00:08:41.505 "num_base_bdevs": 2, 00:08:41.505 "num_base_bdevs_discovered": 1, 00:08:41.505 "num_base_bdevs_operational": 1, 00:08:41.505 "base_bdevs_list": [ 00:08:41.505 { 00:08:41.505 "name": null, 00:08:41.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.505 "is_configured": false, 00:08:41.505 "data_offset": 0, 00:08:41.505 "data_size": 63488 00:08:41.505 }, 00:08:41.505 { 00:08:41.505 "name": "BaseBdev2", 00:08:41.505 "uuid": "ac014ad0-698c-486a-a42d-1c2876ec66f3", 00:08:41.505 "is_configured": true, 00:08:41.505 "data_offset": 2048, 00:08:41.505 "data_size": 63488 00:08:41.505 } 00:08:41.505 ] 00:08:41.505 }' 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.505 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.765 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:41.765 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.765 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.765 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.765 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.765 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.765 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.025 [2024-10-13 02:23:00.462343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.025 [2024-10-13 02:23:00.462500] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.025 [2024-10-13 02:23:00.474137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.025 [2024-10-13 02:23:00.474262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.025 [2024-10-13 02:23:00.474304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74099 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74099 ']' 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74099 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74099 00:08:42.025 killing process with pid 74099 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74099' 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74099 00:08:42.025 [2024-10-13 02:23:00.552058] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.025 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74099 00:08:42.025 [2024-10-13 02:23:00.553029] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.285 02:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:42.285 00:08:42.285 real 0m3.802s 00:08:42.285 user 0m5.927s 00:08:42.285 sys 0m0.790s 00:08:42.285 ************************************ 00:08:42.285 END TEST raid_state_function_test_sb 00:08:42.285 ************************************ 00:08:42.285 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.285 02:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.285 02:23:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:42.285 02:23:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:42.285 02:23:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.285 02:23:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.285 ************************************ 00:08:42.285 START TEST raid_superblock_test 00:08:42.285 ************************************ 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74340 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74340 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74340 ']' 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.285 02:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.285 [2024-10-13 02:23:00.940426] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:42.285 [2024-10-13 02:23:00.940633] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74340 ] 00:08:42.544 [2024-10-13 02:23:01.079745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.544 [2024-10-13 02:23:01.126929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.544 [2024-10-13 02:23:01.170296] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.544 [2024-10-13 02:23:01.170414] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.113 02:23:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.113 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.373 malloc1 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.373 [2024-10-13 02:23:01.813493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.373 [2024-10-13 02:23:01.813642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.373 [2024-10-13 02:23:01.813679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:43.373 [2024-10-13 02:23:01.813725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.373 
[2024-10-13 02:23:01.815837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.373 [2024-10-13 02:23:01.815929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.373 pt1 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.373 malloc2 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.373 02:23:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.373 [2024-10-13 02:23:01.856799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.373 [2024-10-13 02:23:01.857115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.373 [2024-10-13 02:23:01.857242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:43.373 [2024-10-13 02:23:01.857352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.373 [2024-10-13 02:23:01.861478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.373 [2024-10-13 02:23:01.861610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.373 pt2 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.373 [2024-10-13 02:23:01.869916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:43.373 [2024-10-13 02:23:01.872354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.373 [2024-10-13 02:23:01.872582] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:43.373 [2024-10-13 02:23:01.872653] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:43.373 [2024-10-13 
02:23:01.873039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:43.373 [2024-10-13 02:23:01.873268] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:43.373 [2024-10-13 02:23:01.873321] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:43.373 [2024-10-13 02:23:01.873519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.373 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.374 02:23:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.374 "name": "raid_bdev1", 00:08:43.374 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:43.374 "strip_size_kb": 0, 00:08:43.374 "state": "online", 00:08:43.374 "raid_level": "raid1", 00:08:43.374 "superblock": true, 00:08:43.374 "num_base_bdevs": 2, 00:08:43.374 "num_base_bdevs_discovered": 2, 00:08:43.374 "num_base_bdevs_operational": 2, 00:08:43.374 "base_bdevs_list": [ 00:08:43.374 { 00:08:43.374 "name": "pt1", 00:08:43.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.374 "is_configured": true, 00:08:43.374 "data_offset": 2048, 00:08:43.374 "data_size": 63488 00:08:43.374 }, 00:08:43.374 { 00:08:43.374 "name": "pt2", 00:08:43.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.374 "is_configured": true, 00:08:43.374 "data_offset": 2048, 00:08:43.374 "data_size": 63488 00:08:43.374 } 00:08:43.374 ] 00:08:43.374 }' 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.374 02:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.634 
02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.634 [2024-10-13 02:23:02.285436] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.634 "name": "raid_bdev1", 00:08:43.634 "aliases": [ 00:08:43.634 "628565ed-20e4-4326-86c2-b2234a8ef9d3" 00:08:43.634 ], 00:08:43.634 "product_name": "Raid Volume", 00:08:43.634 "block_size": 512, 00:08:43.634 "num_blocks": 63488, 00:08:43.634 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:43.634 "assigned_rate_limits": { 00:08:43.634 "rw_ios_per_sec": 0, 00:08:43.634 "rw_mbytes_per_sec": 0, 00:08:43.634 "r_mbytes_per_sec": 0, 00:08:43.634 "w_mbytes_per_sec": 0 00:08:43.634 }, 00:08:43.634 "claimed": false, 00:08:43.634 "zoned": false, 00:08:43.634 "supported_io_types": { 00:08:43.634 "read": true, 00:08:43.634 "write": true, 00:08:43.634 "unmap": false, 00:08:43.634 "flush": false, 00:08:43.634 "reset": true, 00:08:43.634 "nvme_admin": false, 00:08:43.634 "nvme_io": false, 00:08:43.634 "nvme_io_md": false, 00:08:43.634 "write_zeroes": true, 00:08:43.634 "zcopy": false, 00:08:43.634 "get_zone_info": false, 00:08:43.634 "zone_management": false, 00:08:43.634 "zone_append": false, 00:08:43.634 "compare": false, 00:08:43.634 "compare_and_write": false, 00:08:43.634 "abort": false, 00:08:43.634 "seek_hole": false, 
00:08:43.634 "seek_data": false, 00:08:43.634 "copy": false, 00:08:43.634 "nvme_iov_md": false 00:08:43.634 }, 00:08:43.634 "memory_domains": [ 00:08:43.634 { 00:08:43.634 "dma_device_id": "system", 00:08:43.634 "dma_device_type": 1 00:08:43.634 }, 00:08:43.634 { 00:08:43.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.634 "dma_device_type": 2 00:08:43.634 }, 00:08:43.634 { 00:08:43.634 "dma_device_id": "system", 00:08:43.634 "dma_device_type": 1 00:08:43.634 }, 00:08:43.634 { 00:08:43.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.634 "dma_device_type": 2 00:08:43.634 } 00:08:43.634 ], 00:08:43.634 "driver_specific": { 00:08:43.634 "raid": { 00:08:43.634 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:43.634 "strip_size_kb": 0, 00:08:43.634 "state": "online", 00:08:43.634 "raid_level": "raid1", 00:08:43.634 "superblock": true, 00:08:43.634 "num_base_bdevs": 2, 00:08:43.634 "num_base_bdevs_discovered": 2, 00:08:43.634 "num_base_bdevs_operational": 2, 00:08:43.634 "base_bdevs_list": [ 00:08:43.634 { 00:08:43.634 "name": "pt1", 00:08:43.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.634 "is_configured": true, 00:08:43.634 "data_offset": 2048, 00:08:43.634 "data_size": 63488 00:08:43.634 }, 00:08:43.634 { 00:08:43.634 "name": "pt2", 00:08:43.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.634 "is_configured": true, 00:08:43.634 "data_offset": 2048, 00:08:43.634 "data_size": 63488 00:08:43.634 } 00:08:43.634 ] 00:08:43.634 } 00:08:43.634 } 00:08:43.634 }' 00:08:43.634 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:43.894 pt2' 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.894 02:23:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 [2024-10-13 02:23:02.437124] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=628565ed-20e4-4326-86c2-b2234a8ef9d3 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 628565ed-20e4-4326-86c2-b2234a8ef9d3 ']' 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.894 [2024-10-13 02:23:02.464827] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.894 [2024-10-13 02:23:02.464953] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.894 [2024-10-13 02:23:02.465054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.894 [2024-10-13 02:23:02.465117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.894 [2024-10-13 02:23:02.465127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:43.894 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq 
-r '.[]' 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:43.895 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.155 [2024-10-13 02:23:02.600611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:44.155 [2024-10-13 02:23:02.602528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:44.155 [2024-10-13 02:23:02.602637] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:44.155 [2024-10-13 02:23:02.602724] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:44.155 [2024-10-13 02:23:02.602764] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.155 [2024-10-13 02:23:02.602786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:44.155 request: 00:08:44.155 { 00:08:44.155 "name": "raid_bdev1", 00:08:44.155 "raid_level": "raid1", 00:08:44.155 "base_bdevs": [ 00:08:44.155 "malloc1", 00:08:44.155 "malloc2" 00:08:44.155 ], 00:08:44.155 "superblock": false, 00:08:44.155 "method": "bdev_raid_create", 00:08:44.155 "req_id": 1 00:08:44.155 } 00:08:44.155 Got JSON-RPC error response 00:08:44.155 response: 00:08:44.155 { 00:08:44.155 "code": -17, 00:08:44.155 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:44.155 } 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.155 [2024-10-13 02:23:02.664432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.155 [2024-10-13 02:23:02.664540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.155 [2024-10-13 02:23:02.664580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:44.155 [2024-10-13 02:23:02.664604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.155 [2024-10-13 02:23:02.666694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.155 [2024-10-13 02:23:02.666763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.155 [2024-10-13 02:23:02.666858] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:44.155 [2024-10-13 02:23:02.666939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.155 pt1 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.155 02:23:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.155 "name": "raid_bdev1", 00:08:44.155 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:44.155 "strip_size_kb": 0, 00:08:44.155 "state": "configuring", 00:08:44.155 "raid_level": "raid1", 00:08:44.155 "superblock": true, 00:08:44.155 "num_base_bdevs": 2, 00:08:44.155 "num_base_bdevs_discovered": 1, 00:08:44.155 "num_base_bdevs_operational": 2, 00:08:44.155 "base_bdevs_list": [ 00:08:44.155 { 00:08:44.155 "name": "pt1", 00:08:44.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.155 
"is_configured": true, 00:08:44.155 "data_offset": 2048, 00:08:44.155 "data_size": 63488 00:08:44.155 }, 00:08:44.155 { 00:08:44.155 "name": null, 00:08:44.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.155 "is_configured": false, 00:08:44.155 "data_offset": 2048, 00:08:44.155 "data_size": 63488 00:08:44.155 } 00:08:44.155 ] 00:08:44.155 }' 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.155 02:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.416 [2024-10-13 02:23:03.059794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.416 [2024-10-13 02:23:03.059948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.416 [2024-10-13 02:23:03.059990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:44.416 [2024-10-13 02:23:03.060018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.416 [2024-10-13 02:23:03.060448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.416 [2024-10-13 02:23:03.060505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.416 [2024-10-13 02:23:03.060594] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.416 [2024-10-13 02:23:03.060616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.416 [2024-10-13 02:23:03.060714] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:44.416 [2024-10-13 02:23:03.060723] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.416 [2024-10-13 02:23:03.060978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:44.416 [2024-10-13 02:23:03.061091] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:44.416 [2024-10-13 02:23:03.061104] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:44.416 [2024-10-13 02:23:03.061207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.416 pt2 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.416 
02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.416 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.676 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.676 "name": "raid_bdev1", 00:08:44.676 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:44.676 "strip_size_kb": 0, 00:08:44.676 "state": "online", 00:08:44.676 "raid_level": "raid1", 00:08:44.676 "superblock": true, 00:08:44.676 "num_base_bdevs": 2, 00:08:44.676 "num_base_bdevs_discovered": 2, 00:08:44.676 "num_base_bdevs_operational": 2, 00:08:44.676 "base_bdevs_list": [ 00:08:44.676 { 00:08:44.676 "name": "pt1", 00:08:44.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.676 "is_configured": true, 00:08:44.676 "data_offset": 2048, 00:08:44.676 "data_size": 63488 00:08:44.676 }, 00:08:44.676 { 00:08:44.676 "name": "pt2", 00:08:44.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.676 "is_configured": true, 00:08:44.676 "data_offset": 2048, 00:08:44.676 "data_size": 63488 00:08:44.676 } 00:08:44.676 ] 00:08:44.676 }' 00:08:44.676 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:44.676 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.936 [2024-10-13 02:23:03.511323] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.936 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.936 "name": "raid_bdev1", 00:08:44.936 "aliases": [ 00:08:44.936 "628565ed-20e4-4326-86c2-b2234a8ef9d3" 00:08:44.936 ], 00:08:44.936 "product_name": "Raid Volume", 00:08:44.936 "block_size": 512, 00:08:44.936 "num_blocks": 63488, 00:08:44.936 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:44.936 "assigned_rate_limits": { 00:08:44.936 "rw_ios_per_sec": 0, 00:08:44.936 "rw_mbytes_per_sec": 0, 00:08:44.936 "r_mbytes_per_sec": 0, 00:08:44.936 "w_mbytes_per_sec": 0 
00:08:44.936 }, 00:08:44.936 "claimed": false, 00:08:44.936 "zoned": false, 00:08:44.936 "supported_io_types": { 00:08:44.936 "read": true, 00:08:44.936 "write": true, 00:08:44.936 "unmap": false, 00:08:44.936 "flush": false, 00:08:44.936 "reset": true, 00:08:44.936 "nvme_admin": false, 00:08:44.936 "nvme_io": false, 00:08:44.936 "nvme_io_md": false, 00:08:44.936 "write_zeroes": true, 00:08:44.936 "zcopy": false, 00:08:44.936 "get_zone_info": false, 00:08:44.936 "zone_management": false, 00:08:44.936 "zone_append": false, 00:08:44.936 "compare": false, 00:08:44.936 "compare_and_write": false, 00:08:44.936 "abort": false, 00:08:44.936 "seek_hole": false, 00:08:44.936 "seek_data": false, 00:08:44.936 "copy": false, 00:08:44.936 "nvme_iov_md": false 00:08:44.936 }, 00:08:44.936 "memory_domains": [ 00:08:44.937 { 00:08:44.937 "dma_device_id": "system", 00:08:44.937 "dma_device_type": 1 00:08:44.937 }, 00:08:44.937 { 00:08:44.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.937 "dma_device_type": 2 00:08:44.937 }, 00:08:44.937 { 00:08:44.937 "dma_device_id": "system", 00:08:44.937 "dma_device_type": 1 00:08:44.937 }, 00:08:44.937 { 00:08:44.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.937 "dma_device_type": 2 00:08:44.937 } 00:08:44.937 ], 00:08:44.937 "driver_specific": { 00:08:44.937 "raid": { 00:08:44.937 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:44.937 "strip_size_kb": 0, 00:08:44.937 "state": "online", 00:08:44.937 "raid_level": "raid1", 00:08:44.937 "superblock": true, 00:08:44.937 "num_base_bdevs": 2, 00:08:44.937 "num_base_bdevs_discovered": 2, 00:08:44.937 "num_base_bdevs_operational": 2, 00:08:44.937 "base_bdevs_list": [ 00:08:44.937 { 00:08:44.937 "name": "pt1", 00:08:44.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.937 "is_configured": true, 00:08:44.937 "data_offset": 2048, 00:08:44.937 "data_size": 63488 00:08:44.937 }, 00:08:44.937 { 00:08:44.937 "name": "pt2", 00:08:44.937 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:44.937 "is_configured": true, 00:08:44.937 "data_offset": 2048, 00:08:44.937 "data_size": 63488 00:08:44.937 } 00:08:44.937 ] 00:08:44.937 } 00:08:44.937 } 00:08:44.937 }' 00:08:44.937 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.937 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.937 pt2' 00:08:44.937 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.937 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.937 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.937 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.937 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.937 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.937 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.197 [2024-10-13 02:23:03.722985] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 628565ed-20e4-4326-86c2-b2234a8ef9d3 '!=' 628565ed-20e4-4326-86c2-b2234a8ef9d3 ']' 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.197 [2024-10-13 02:23:03.770679] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:45.197 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:45.198 "name": "raid_bdev1", 00:08:45.198 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:45.198 "strip_size_kb": 0, 00:08:45.198 "state": "online", 00:08:45.198 "raid_level": "raid1", 00:08:45.198 "superblock": true, 00:08:45.198 "num_base_bdevs": 2, 00:08:45.198 "num_base_bdevs_discovered": 1, 00:08:45.198 "num_base_bdevs_operational": 1, 00:08:45.198 "base_bdevs_list": [ 00:08:45.198 { 00:08:45.198 "name": null, 00:08:45.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.198 "is_configured": false, 00:08:45.198 "data_offset": 0, 00:08:45.198 "data_size": 63488 00:08:45.198 }, 00:08:45.198 { 00:08:45.198 "name": "pt2", 00:08:45.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.198 "is_configured": true, 00:08:45.198 "data_offset": 2048, 00:08:45.198 "data_size": 63488 00:08:45.198 } 00:08:45.198 ] 00:08:45.198 }' 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.198 02:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 [2024-10-13 02:23:04.174059] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.770 [2024-10-13 02:23:04.174108] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.770 [2024-10-13 02:23:04.174201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.770 [2024-10-13 02:23:04.174252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.770 [2024-10-13 02:23:04.174261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.770 [2024-10-13 02:23:04.245945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.770 [2024-10-13 02:23:04.246128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.770 [2024-10-13 02:23:04.246170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:45.770 [2024-10-13 02:23:04.246181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.770 [2024-10-13 02:23:04.248394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.770 [2024-10-13 02:23:04.248439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.770 [2024-10-13 02:23:04.248533] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.770 [2024-10-13 02:23:04.248569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.770 [2024-10-13 02:23:04.248650] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:45.770 [2024-10-13 02:23:04.248666] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.770 [2024-10-13 02:23:04.248922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:45.770 [2024-10-13 02:23:04.249034] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:45.770 [2024-10-13 02:23:04.249044] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001c80 00:08:45.770 [2024-10-13 02:23:04.249174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.770 pt2 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.770 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:45.771 "name": "raid_bdev1", 00:08:45.771 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:45.771 "strip_size_kb": 0, 00:08:45.771 "state": "online", 00:08:45.771 "raid_level": "raid1", 00:08:45.771 "superblock": true, 00:08:45.771 "num_base_bdevs": 2, 00:08:45.771 "num_base_bdevs_discovered": 1, 00:08:45.771 "num_base_bdevs_operational": 1, 00:08:45.771 "base_bdevs_list": [ 00:08:45.771 { 00:08:45.771 "name": null, 00:08:45.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.771 "is_configured": false, 00:08:45.771 "data_offset": 2048, 00:08:45.771 "data_size": 63488 00:08:45.771 }, 00:08:45.771 { 00:08:45.771 "name": "pt2", 00:08:45.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.771 "is_configured": true, 00:08:45.771 "data_offset": 2048, 00:08:45.771 "data_size": 63488 00:08:45.771 } 00:08:45.771 ] 00:08:45.771 }' 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.771 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.031 [2024-10-13 02:23:04.649216] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.031 [2024-10-13 02:23:04.649312] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.031 [2024-10-13 02:23:04.649423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.031 [2024-10-13 02:23:04.649489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.031 [2024-10-13 02:23:04.649535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.031 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.031 [2024-10-13 02:23:04.709130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:46.031 [2024-10-13 02:23:04.709251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.031 [2024-10-13 02:23:04.709285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:08:46.031 [2024-10-13 02:23:04.709317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.031 [2024-10-13 02:23:04.711459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.031 [2024-10-13 02:23:04.711538] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:46.031 [2024-10-13 02:23:04.711639] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:46.031 [2024-10-13 02:23:04.711715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:46.031 [2024-10-13 02:23:04.711854] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:46.031 [2024-10-13 02:23:04.711927] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.031 [2024-10-13 02:23:04.711967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:08:46.031 [2024-10-13 02:23:04.712047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.031 [2024-10-13 02:23:04.712147] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:46.031 [2024-10-13 02:23:04.712187] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.031 [2024-10-13 02:23:04.712413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:46.031 [2024-10-13 02:23:04.712562] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:46.031 [2024-10-13 02:23:04.712602] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:46.031 [2024-10-13 02:23:04.712746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.291 pt1 00:08:46.291 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.291 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.292 "name": "raid_bdev1", 00:08:46.292 "uuid": "628565ed-20e4-4326-86c2-b2234a8ef9d3", 00:08:46.292 "strip_size_kb": 0, 00:08:46.292 "state": "online", 00:08:46.292 "raid_level": "raid1", 00:08:46.292 "superblock": true, 00:08:46.292 "num_base_bdevs": 2, 00:08:46.292 "num_base_bdevs_discovered": 1, 00:08:46.292 "num_base_bdevs_operational": 
1, 00:08:46.292 "base_bdevs_list": [ 00:08:46.292 { 00:08:46.292 "name": null, 00:08:46.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.292 "is_configured": false, 00:08:46.292 "data_offset": 2048, 00:08:46.292 "data_size": 63488 00:08:46.292 }, 00:08:46.292 { 00:08:46.292 "name": "pt2", 00:08:46.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.292 "is_configured": true, 00:08:46.292 "data_offset": 2048, 00:08:46.292 "data_size": 63488 00:08:46.292 } 00:08:46.292 ] 00:08:46.292 }' 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.292 02:23:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:46.552 [2024-10-13 02:23:05.216504] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.552 02:23:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.811 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 628565ed-20e4-4326-86c2-b2234a8ef9d3 '!=' 628565ed-20e4-4326-86c2-b2234a8ef9d3 ']' 00:08:46.811 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74340 00:08:46.811 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74340 ']' 00:08:46.811 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74340 00:08:46.811 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:46.812 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.812 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74340 00:08:46.812 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.812 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.812 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74340' 00:08:46.812 killing process with pid 74340 00:08:46.812 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74340 00:08:46.812 [2024-10-13 02:23:05.305677] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.812 [2024-10-13 02:23:05.305788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.812 [2024-10-13 02:23:05.305838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.812 [2024-10-13 02:23:05.305848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:46.812 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 
74340 00:08:46.812 [2024-10-13 02:23:05.329405] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.074 02:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:47.074 00:08:47.074 real 0m4.715s 00:08:47.074 user 0m7.612s 00:08:47.074 sys 0m0.994s 00:08:47.074 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.074 ************************************ 00:08:47.074 END TEST raid_superblock_test 00:08:47.074 ************************************ 00:08:47.074 02:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.074 02:23:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:47.074 02:23:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:47.074 02:23:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.074 02:23:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.074 ************************************ 00:08:47.074 START TEST raid_read_error_test 00:08:47.074 ************************************ 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.T4fUJhui4B 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74658 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74658 00:08:47.074 
02:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74658 ']' 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.074 02:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.074 [2024-10-13 02:23:05.744715] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:47.074 [2024-10-13 02:23:05.744837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74658 ] 00:08:47.354 [2024-10-13 02:23:05.888595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.354 [2024-10-13 02:23:05.936600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.354 [2024-10-13 02:23:05.979485] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.354 [2024-10-13 02:23:05.979522] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.936 BaseBdev1_malloc 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.936 true 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.936 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.936 [2024-10-13 02:23:06.614692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:47.936 [2024-10-13 02:23:06.614860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.937 [2024-10-13 02:23:06.614913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:47.937 [2024-10-13 02:23:06.614953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.937 [2024-10-13 02:23:06.617098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.937 [2024-10-13 02:23:06.617172] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.197 BaseBdev1 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.197 BaseBdev2_malloc 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.197 true 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.197 [2024-10-13 02:23:06.665843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.197 [2024-10-13 02:23:06.666004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.197 [2024-10-13 02:23:06.666045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:48.197 [2024-10-13 02:23:06.666073] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.197 [2024-10-13 02:23:06.668185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.197 [2024-10-13 02:23:06.668226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.197 BaseBdev2 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.197 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.197 [2024-10-13 02:23:06.677906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.197 [2024-10-13 02:23:06.679812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.197 [2024-10-13 02:23:06.680078] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:48.197 [2024-10-13 02:23:06.680125] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:48.197 [2024-10-13 02:23:06.680403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:48.197 [2024-10-13 02:23:06.680568] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:48.197 [2024-10-13 02:23:06.680610] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:48.197 [2024-10-13 02:23:06.680787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.198 "name": "raid_bdev1", 00:08:48.198 "uuid": "d3ac2a9f-6f8c-431a-bd4c-6592b3987f86", 00:08:48.198 "strip_size_kb": 0, 00:08:48.198 "state": "online", 00:08:48.198 "raid_level": "raid1", 00:08:48.198 "superblock": true, 00:08:48.198 "num_base_bdevs": 2, 00:08:48.198 
"num_base_bdevs_discovered": 2, 00:08:48.198 "num_base_bdevs_operational": 2, 00:08:48.198 "base_bdevs_list": [ 00:08:48.198 { 00:08:48.198 "name": "BaseBdev1", 00:08:48.198 "uuid": "c15a9d3a-d6d8-5809-8594-3a63b9cfa0a1", 00:08:48.198 "is_configured": true, 00:08:48.198 "data_offset": 2048, 00:08:48.198 "data_size": 63488 00:08:48.198 }, 00:08:48.198 { 00:08:48.198 "name": "BaseBdev2", 00:08:48.198 "uuid": "19aaf906-daa2-54fa-a01e-a1234f1e1d2b", 00:08:48.198 "is_configured": true, 00:08:48.198 "data_offset": 2048, 00:08:48.198 "data_size": 63488 00:08:48.198 } 00:08:48.198 ] 00:08:48.198 }' 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.198 02:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.458 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:48.458 02:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:48.718 [2024-10-13 02:23:07.217444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:49.658 02:23:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.658 "name": "raid_bdev1", 00:08:49.658 "uuid": "d3ac2a9f-6f8c-431a-bd4c-6592b3987f86", 00:08:49.658 "strip_size_kb": 0, 00:08:49.658 "state": "online", 
00:08:49.658 "raid_level": "raid1", 00:08:49.658 "superblock": true, 00:08:49.658 "num_base_bdevs": 2, 00:08:49.658 "num_base_bdevs_discovered": 2, 00:08:49.658 "num_base_bdevs_operational": 2, 00:08:49.658 "base_bdevs_list": [ 00:08:49.658 { 00:08:49.658 "name": "BaseBdev1", 00:08:49.658 "uuid": "c15a9d3a-d6d8-5809-8594-3a63b9cfa0a1", 00:08:49.658 "is_configured": true, 00:08:49.658 "data_offset": 2048, 00:08:49.658 "data_size": 63488 00:08:49.658 }, 00:08:49.658 { 00:08:49.658 "name": "BaseBdev2", 00:08:49.658 "uuid": "19aaf906-daa2-54fa-a01e-a1234f1e1d2b", 00:08:49.658 "is_configured": true, 00:08:49.658 "data_offset": 2048, 00:08:49.658 "data_size": 63488 00:08:49.658 } 00:08:49.658 ] 00:08:49.658 }' 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.658 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 [2024-10-13 02:23:08.581698] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.918 [2024-10-13 02:23:08.581820] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.918 [2024-10-13 02:23:08.584327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.918 [2024-10-13 02:23:08.584412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.918 [2024-10-13 02:23:08.584517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.918 [2024-10-13 02:23:08.584579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name 
raid_bdev1, state offline 00:08:49.918 { 00:08:49.918 "results": [ 00:08:49.918 { 00:08:49.918 "job": "raid_bdev1", 00:08:49.918 "core_mask": "0x1", 00:08:49.918 "workload": "randrw", 00:08:49.918 "percentage": 50, 00:08:49.918 "status": "finished", 00:08:49.918 "queue_depth": 1, 00:08:49.918 "io_size": 131072, 00:08:49.918 "runtime": 1.364902, 00:08:49.918 "iops": 18940.553973838414, 00:08:49.918 "mibps": 2367.5692467298018, 00:08:49.918 "io_failed": 0, 00:08:49.918 "io_timeout": 0, 00:08:49.918 "avg_latency_us": 50.18081426892888, 00:08:49.918 "min_latency_us": 22.134497816593885, 00:08:49.918 "max_latency_us": 1588.317903930131 00:08:49.918 } 00:08:49.918 ], 00:08:49.918 "core_count": 1 00:08:49.918 } 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74658 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74658 ']' 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74658 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.918 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74658 00:08:50.178 killing process with pid 74658 00:08:50.178 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.178 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.178 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74658' 00:08:50.178 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74658 00:08:50.178 [2024-10-13 
02:23:08.620713] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.178 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74658 00:08:50.178 [2024-10-13 02:23:08.637032] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.178 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.T4fUJhui4B 00:08:50.178 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.178 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:50.438 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:50.438 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:50.438 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.438 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:50.438 02:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:50.438 00:08:50.438 real 0m3.226s 00:08:50.438 user 0m4.088s 00:08:50.438 sys 0m0.495s 00:08:50.438 ************************************ 00:08:50.438 END TEST raid_read_error_test 00:08:50.438 ************************************ 00:08:50.438 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.438 02:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.438 02:23:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:50.438 02:23:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:50.438 02:23:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.438 02:23:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.438 ************************************ 00:08:50.438 START TEST 
raid_write_error_test 00:08:50.438 ************************************ 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:50.438 02:23:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.M6VlWDfsz6 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74788 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74788 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74788 ']' 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.438 02:23:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.438 [2024-10-13 02:23:09.040606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:50.438 [2024-10-13 02:23:09.040732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74788 ] 00:08:50.698 [2024-10-13 02:23:09.168266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.698 [2024-10-13 02:23:09.217917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.698 [2024-10-13 02:23:09.262066] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.698 [2024-10-13 02:23:09.262106] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.267 BaseBdev1_malloc 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.267 true 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.267 [2024-10-13 02:23:09.929798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:51.267 [2024-10-13 02:23:09.929966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.267 [2024-10-13 02:23:09.930020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:51.267 [2024-10-13 02:23:09.930054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.267 [2024-10-13 02:23:09.932246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.267 [2024-10-13 02:23:09.932324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:51.267 BaseBdev1 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.267 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 BaseBdev2_malloc 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:51.527 02:23:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 true 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 [2024-10-13 02:23:09.980642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:51.527 [2024-10-13 02:23:09.980714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.527 [2024-10-13 02:23:09.980736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:51.527 [2024-10-13 02:23:09.980746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.527 [2024-10-13 02:23:09.982824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.527 [2024-10-13 02:23:09.982866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:51.527 BaseBdev2 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.527 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 [2024-10-13 02:23:09.992713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:51.528 [2024-10-13 02:23:09.994658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.528 [2024-10-13 02:23:09.994907] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:51.528 [2024-10-13 02:23:09.994967] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:51.528 [2024-10-13 02:23:09.995262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:08:51.528 [2024-10-13 02:23:09.995445] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:51.528 [2024-10-13 02:23:09.995492] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:51.528 [2024-10-13 02:23:09.995661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.528 02:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.528 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.528 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.528 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.528 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.528 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.528 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.528 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.528 02:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.528 "name": "raid_bdev1", 00:08:51.528 "uuid": "65d19e0d-fec9-4af6-8d3b-114124a0c621", 00:08:51.528 "strip_size_kb": 0, 00:08:51.528 "state": "online", 00:08:51.528 "raid_level": "raid1", 00:08:51.528 "superblock": true, 00:08:51.528 "num_base_bdevs": 2, 00:08:51.528 "num_base_bdevs_discovered": 2, 00:08:51.528 "num_base_bdevs_operational": 2, 00:08:51.528 "base_bdevs_list": [ 00:08:51.528 { 00:08:51.528 "name": "BaseBdev1", 00:08:51.528 "uuid": "6a2fc5c4-661e-5f85-ac38-d23c44759d10", 00:08:51.528 "is_configured": true, 00:08:51.528 "data_offset": 2048, 00:08:51.528 "data_size": 63488 00:08:51.528 }, 00:08:51.528 { 00:08:51.528 "name": "BaseBdev2", 00:08:51.528 "uuid": "2f808d66-ad26-54f6-9bb2-39c48be8d4d8", 00:08:51.528 "is_configured": true, 00:08:51.528 "data_offset": 2048, 00:08:51.528 "data_size": 63488 00:08:51.528 } 00:08:51.528 ] 00:08:51.528 }' 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.528 02:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.787 02:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:51.787 02:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:52.046 [2024-10-13 02:23:10.552162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:52.985 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:52.985 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.985 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.985 [2024-10-13 02:23:11.481482] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:52.985 [2024-10-13 02:23:11.481645] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.985 [2024-10-13 02:23:11.481887] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:08:52.985 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.985 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:52.985 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:52.985 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:52.985 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:52.985 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.986 02:23:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.986 "name": "raid_bdev1", 00:08:52.986 "uuid": "65d19e0d-fec9-4af6-8d3b-114124a0c621", 00:08:52.986 "strip_size_kb": 0, 00:08:52.986 "state": "online", 00:08:52.986 "raid_level": "raid1", 00:08:52.986 "superblock": true, 00:08:52.986 "num_base_bdevs": 2, 00:08:52.986 "num_base_bdevs_discovered": 1, 00:08:52.986 "num_base_bdevs_operational": 1, 00:08:52.986 "base_bdevs_list": [ 00:08:52.986 { 00:08:52.986 "name": null, 00:08:52.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.986 "is_configured": false, 00:08:52.986 "data_offset": 0, 00:08:52.986 "data_size": 63488 00:08:52.986 }, 
00:08:52.986 { 00:08:52.986 "name": "BaseBdev2", 00:08:52.986 "uuid": "2f808d66-ad26-54f6-9bb2-39c48be8d4d8", 00:08:52.986 "is_configured": true, 00:08:52.986 "data_offset": 2048, 00:08:52.986 "data_size": 63488 00:08:52.986 } 00:08:52.986 ] 00:08:52.986 }' 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.986 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.554 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.554 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.554 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.554 [2024-10-13 02:23:11.950745] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.554 [2024-10-13 02:23:11.950888] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.554 [2024-10-13 02:23:11.953318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.554 [2024-10-13 02:23:11.953406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.554 [2024-10-13 02:23:11.953477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.554 [2024-10-13 02:23:11.953533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:53.554 { 00:08:53.554 "results": [ 00:08:53.554 { 00:08:53.554 "job": "raid_bdev1", 00:08:53.554 "core_mask": "0x1", 00:08:53.554 "workload": "randrw", 00:08:53.554 "percentage": 50, 00:08:53.554 "status": "finished", 00:08:53.554 "queue_depth": 1, 00:08:53.554 "io_size": 131072, 00:08:53.554 "runtime": 1.39954, 00:08:53.554 "iops": 21650.685225145404, 00:08:53.554 "mibps": 2706.3356531431755, 00:08:53.554 "io_failed": 0, 
00:08:53.554 "io_timeout": 0, 00:08:53.554 "avg_latency_us": 43.62003508034165, 00:08:53.554 "min_latency_us": 21.575545851528386, 00:08:53.554 "max_latency_us": 1359.3711790393013 00:08:53.554 } 00:08:53.554 ], 00:08:53.555 "core_count": 1 00:08:53.555 } 00:08:53.555 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.555 02:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74788 00:08:53.555 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74788 ']' 00:08:53.555 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74788 00:08:53.555 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:53.555 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.555 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74788 00:08:53.555 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.555 02:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.555 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74788' 00:08:53.555 killing process with pid 74788 00:08:53.555 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74788 00:08:53.555 [2024-10-13 02:23:12.002416] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.555 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74788 00:08:53.555 [2024-10-13 02:23:12.018453] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.814 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.M6VlWDfsz6 00:08:53.814 02:23:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:53.814 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:53.814 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:53.814 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:53.814 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.814 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:53.814 02:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:53.814 00:08:53.814 real 0m3.322s 00:08:53.814 user 0m4.250s 00:08:53.814 sys 0m0.509s 00:08:53.814 ************************************ 00:08:53.814 END TEST raid_write_error_test 00:08:53.814 ************************************ 00:08:53.814 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.814 02:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.814 02:23:12 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:53.814 02:23:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:53.814 02:23:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:53.814 02:23:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:53.814 02:23:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.814 02:23:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.814 ************************************ 00:08:53.814 START TEST raid_state_function_test 00:08:53.814 ************************************ 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:53.814 02:23:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.814 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74915 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74915' 00:08:53.815 Process raid pid: 74915 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74915 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74915 ']' 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.815 02:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.815 [2024-10-13 02:23:12.431407] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:53.815 [2024-10-13 02:23:12.431612] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.075 [2024-10-13 02:23:12.561943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.075 [2024-10-13 02:23:12.610496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.075 [2024-10-13 02:23:12.653398] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.075 [2024-10-13 02:23:12.653512] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.652 [2024-10-13 02:23:13.271463] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.652 [2024-10-13 02:23:13.271612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.652 [2024-10-13 02:23:13.271656] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.652 [2024-10-13 02:23:13.271682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.652 [2024-10-13 02:23:13.271699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.652 [2024-10-13 02:23:13.271725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.652 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.653 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.653 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.653 "name": "Existed_Raid", 00:08:54.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.653 "strip_size_kb": 64, 00:08:54.653 "state": "configuring", 00:08:54.653 "raid_level": "raid0", 00:08:54.653 "superblock": false, 00:08:54.653 "num_base_bdevs": 3, 00:08:54.653 "num_base_bdevs_discovered": 0, 00:08:54.653 "num_base_bdevs_operational": 3, 00:08:54.653 "base_bdevs_list": [ 00:08:54.653 { 00:08:54.653 "name": "BaseBdev1", 00:08:54.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.653 "is_configured": false, 00:08:54.653 "data_offset": 0, 00:08:54.653 "data_size": 0 00:08:54.653 }, 00:08:54.653 { 00:08:54.653 "name": "BaseBdev2", 00:08:54.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.653 "is_configured": false, 00:08:54.653 "data_offset": 0, 00:08:54.653 "data_size": 0 00:08:54.653 }, 00:08:54.653 { 00:08:54.653 "name": "BaseBdev3", 00:08:54.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.653 "is_configured": false, 00:08:54.653 "data_offset": 0, 00:08:54.653 "data_size": 0 00:08:54.653 } 00:08:54.653 ] 00:08:54.653 }' 00:08:54.653 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.653 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.223 02:23:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.223 [2024-10-13 02:23:13.750536] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.223 [2024-10-13 02:23:13.750641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.223 [2024-10-13 02:23:13.762535] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.223 [2024-10-13 02:23:13.762624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.223 [2024-10-13 02:23:13.762652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.223 [2024-10-13 02:23:13.762675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.223 [2024-10-13 02:23:13.762694] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.223 [2024-10-13 02:23:13.762715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.223 [2024-10-13 02:23:13.783564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.223 BaseBdev1 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.223 [ 00:08:55.223 { 00:08:55.223 "name": "BaseBdev1", 00:08:55.223 "aliases": [ 00:08:55.223 "a177c47e-bdc1-440a-ae81-2c12c53ede2b" 00:08:55.223 ], 00:08:55.223 
"product_name": "Malloc disk", 00:08:55.223 "block_size": 512, 00:08:55.223 "num_blocks": 65536, 00:08:55.223 "uuid": "a177c47e-bdc1-440a-ae81-2c12c53ede2b", 00:08:55.223 "assigned_rate_limits": { 00:08:55.223 "rw_ios_per_sec": 0, 00:08:55.223 "rw_mbytes_per_sec": 0, 00:08:55.223 "r_mbytes_per_sec": 0, 00:08:55.223 "w_mbytes_per_sec": 0 00:08:55.223 }, 00:08:55.223 "claimed": true, 00:08:55.223 "claim_type": "exclusive_write", 00:08:55.223 "zoned": false, 00:08:55.223 "supported_io_types": { 00:08:55.223 "read": true, 00:08:55.223 "write": true, 00:08:55.223 "unmap": true, 00:08:55.223 "flush": true, 00:08:55.223 "reset": true, 00:08:55.223 "nvme_admin": false, 00:08:55.223 "nvme_io": false, 00:08:55.223 "nvme_io_md": false, 00:08:55.223 "write_zeroes": true, 00:08:55.223 "zcopy": true, 00:08:55.223 "get_zone_info": false, 00:08:55.223 "zone_management": false, 00:08:55.223 "zone_append": false, 00:08:55.223 "compare": false, 00:08:55.223 "compare_and_write": false, 00:08:55.223 "abort": true, 00:08:55.223 "seek_hole": false, 00:08:55.223 "seek_data": false, 00:08:55.223 "copy": true, 00:08:55.223 "nvme_iov_md": false 00:08:55.223 }, 00:08:55.223 "memory_domains": [ 00:08:55.223 { 00:08:55.223 "dma_device_id": "system", 00:08:55.223 "dma_device_type": 1 00:08:55.223 }, 00:08:55.223 { 00:08:55.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.223 "dma_device_type": 2 00:08:55.223 } 00:08:55.223 ], 00:08:55.223 "driver_specific": {} 00:08:55.223 } 00:08:55.223 ] 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.223 02:23:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.223 "name": "Existed_Raid", 00:08:55.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.223 "strip_size_kb": 64, 00:08:55.223 "state": "configuring", 00:08:55.223 "raid_level": "raid0", 00:08:55.223 "superblock": false, 00:08:55.223 "num_base_bdevs": 3, 00:08:55.223 "num_base_bdevs_discovered": 1, 00:08:55.223 "num_base_bdevs_operational": 3, 00:08:55.223 "base_bdevs_list": [ 00:08:55.223 { 00:08:55.223 "name": "BaseBdev1", 
00:08:55.223 "uuid": "a177c47e-bdc1-440a-ae81-2c12c53ede2b", 00:08:55.223 "is_configured": true, 00:08:55.223 "data_offset": 0, 00:08:55.223 "data_size": 65536 00:08:55.223 }, 00:08:55.223 { 00:08:55.223 "name": "BaseBdev2", 00:08:55.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.223 "is_configured": false, 00:08:55.223 "data_offset": 0, 00:08:55.223 "data_size": 0 00:08:55.223 }, 00:08:55.223 { 00:08:55.223 "name": "BaseBdev3", 00:08:55.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.223 "is_configured": false, 00:08:55.223 "data_offset": 0, 00:08:55.223 "data_size": 0 00:08:55.223 } 00:08:55.223 ] 00:08:55.223 }' 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.223 02:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.792 [2024-10-13 02:23:14.274847] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.792 [2024-10-13 02:23:14.275024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.792 [2024-10-13 
02:23:14.286855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.792 [2024-10-13 02:23:14.288718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.792 [2024-10-13 02:23:14.288800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.792 [2024-10-13 02:23:14.288828] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.792 [2024-10-13 02:23:14.288851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.792 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.793 "name": "Existed_Raid", 00:08:55.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.793 "strip_size_kb": 64, 00:08:55.793 "state": "configuring", 00:08:55.793 "raid_level": "raid0", 00:08:55.793 "superblock": false, 00:08:55.793 "num_base_bdevs": 3, 00:08:55.793 "num_base_bdevs_discovered": 1, 00:08:55.793 "num_base_bdevs_operational": 3, 00:08:55.793 "base_bdevs_list": [ 00:08:55.793 { 00:08:55.793 "name": "BaseBdev1", 00:08:55.793 "uuid": "a177c47e-bdc1-440a-ae81-2c12c53ede2b", 00:08:55.793 "is_configured": true, 00:08:55.793 "data_offset": 0, 00:08:55.793 "data_size": 65536 00:08:55.793 }, 00:08:55.793 { 00:08:55.793 "name": "BaseBdev2", 00:08:55.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.793 "is_configured": false, 00:08:55.793 "data_offset": 0, 00:08:55.793 "data_size": 0 00:08:55.793 }, 00:08:55.793 { 00:08:55.793 "name": "BaseBdev3", 00:08:55.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.793 "is_configured": false, 00:08:55.793 "data_offset": 0, 00:08:55.793 "data_size": 0 00:08:55.793 } 00:08:55.793 ] 00:08:55.793 }' 00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:55.793 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.363 [2024-10-13 02:23:14.783658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.363 BaseBdev2 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.363 02:23:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.363 [ 00:08:56.363 { 00:08:56.363 "name": "BaseBdev2", 00:08:56.363 "aliases": [ 00:08:56.363 "3652dc0c-3f61-447c-9376-3118fe3a4518" 00:08:56.363 ], 00:08:56.363 "product_name": "Malloc disk", 00:08:56.363 "block_size": 512, 00:08:56.363 "num_blocks": 65536, 00:08:56.363 "uuid": "3652dc0c-3f61-447c-9376-3118fe3a4518", 00:08:56.363 "assigned_rate_limits": { 00:08:56.363 "rw_ios_per_sec": 0, 00:08:56.363 "rw_mbytes_per_sec": 0, 00:08:56.363 "r_mbytes_per_sec": 0, 00:08:56.363 "w_mbytes_per_sec": 0 00:08:56.363 }, 00:08:56.363 "claimed": true, 00:08:56.363 "claim_type": "exclusive_write", 00:08:56.363 "zoned": false, 00:08:56.363 "supported_io_types": { 00:08:56.363 "read": true, 00:08:56.363 "write": true, 00:08:56.363 "unmap": true, 00:08:56.363 "flush": true, 00:08:56.363 "reset": true, 00:08:56.363 "nvme_admin": false, 00:08:56.363 "nvme_io": false, 00:08:56.363 "nvme_io_md": false, 00:08:56.363 "write_zeroes": true, 00:08:56.363 "zcopy": true, 00:08:56.363 "get_zone_info": false, 00:08:56.363 "zone_management": false, 00:08:56.363 "zone_append": false, 00:08:56.363 "compare": false, 00:08:56.363 "compare_and_write": false, 00:08:56.363 "abort": true, 00:08:56.363 "seek_hole": false, 00:08:56.363 "seek_data": false, 00:08:56.363 "copy": true, 00:08:56.363 "nvme_iov_md": false 00:08:56.363 }, 00:08:56.363 "memory_domains": [ 00:08:56.363 { 00:08:56.363 "dma_device_id": "system", 00:08:56.363 "dma_device_type": 1 00:08:56.363 }, 00:08:56.363 { 00:08:56.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.363 "dma_device_type": 2 00:08:56.363 } 00:08:56.363 ], 00:08:56.363 "driver_specific": {} 00:08:56.363 } 00:08:56.363 ] 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.363 02:23:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.363 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.363 "name": "Existed_Raid", 00:08:56.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.363 "strip_size_kb": 64, 00:08:56.363 "state": "configuring", 00:08:56.363 "raid_level": "raid0", 00:08:56.363 "superblock": false, 00:08:56.363 "num_base_bdevs": 3, 00:08:56.363 "num_base_bdevs_discovered": 2, 00:08:56.363 "num_base_bdevs_operational": 3, 00:08:56.363 "base_bdevs_list": [ 00:08:56.363 { 00:08:56.363 "name": "BaseBdev1", 00:08:56.363 "uuid": "a177c47e-bdc1-440a-ae81-2c12c53ede2b", 00:08:56.363 "is_configured": true, 00:08:56.363 "data_offset": 0, 00:08:56.363 "data_size": 65536 00:08:56.363 }, 00:08:56.364 { 00:08:56.364 "name": "BaseBdev2", 00:08:56.364 "uuid": "3652dc0c-3f61-447c-9376-3118fe3a4518", 00:08:56.364 "is_configured": true, 00:08:56.364 "data_offset": 0, 00:08:56.364 "data_size": 65536 00:08:56.364 }, 00:08:56.364 { 00:08:56.364 "name": "BaseBdev3", 00:08:56.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.364 "is_configured": false, 00:08:56.364 "data_offset": 0, 00:08:56.364 "data_size": 0 00:08:56.364 } 00:08:56.364 ] 00:08:56.364 }' 00:08:56.364 02:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.364 02:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.623 [2024-10-13 02:23:15.276147] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.623 [2024-10-13 02:23:15.276276] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:56.623 [2024-10-13 02:23:15.276313] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:56.623 [2024-10-13 02:23:15.276685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:56.623 [2024-10-13 02:23:15.276925] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:56.623 [2024-10-13 02:23:15.276973] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:56.623 [2024-10-13 02:23:15.277248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.623 BaseBdev3 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.623 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.624 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.624 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.624 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.624 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.624 
02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.624 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.624 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.624 [ 00:08:56.624 { 00:08:56.883 "name": "BaseBdev3", 00:08:56.883 "aliases": [ 00:08:56.883 "f3564ebc-e461-4251-9132-b416a7d04f0c" 00:08:56.883 ], 00:08:56.883 "product_name": "Malloc disk", 00:08:56.883 "block_size": 512, 00:08:56.883 "num_blocks": 65536, 00:08:56.883 "uuid": "f3564ebc-e461-4251-9132-b416a7d04f0c", 00:08:56.883 "assigned_rate_limits": { 00:08:56.883 "rw_ios_per_sec": 0, 00:08:56.883 "rw_mbytes_per_sec": 0, 00:08:56.883 "r_mbytes_per_sec": 0, 00:08:56.883 "w_mbytes_per_sec": 0 00:08:56.883 }, 00:08:56.883 "claimed": true, 00:08:56.883 "claim_type": "exclusive_write", 00:08:56.883 "zoned": false, 00:08:56.883 "supported_io_types": { 00:08:56.883 "read": true, 00:08:56.883 "write": true, 00:08:56.883 "unmap": true, 00:08:56.883 "flush": true, 00:08:56.883 "reset": true, 00:08:56.883 "nvme_admin": false, 00:08:56.883 "nvme_io": false, 00:08:56.883 "nvme_io_md": false, 00:08:56.883 "write_zeroes": true, 00:08:56.883 "zcopy": true, 00:08:56.883 "get_zone_info": false, 00:08:56.883 "zone_management": false, 00:08:56.883 "zone_append": false, 00:08:56.883 "compare": false, 00:08:56.883 "compare_and_write": false, 00:08:56.883 "abort": true, 00:08:56.883 "seek_hole": false, 00:08:56.883 "seek_data": false, 00:08:56.883 "copy": true, 00:08:56.883 "nvme_iov_md": false 00:08:56.883 }, 00:08:56.883 "memory_domains": [ 00:08:56.883 { 00:08:56.883 "dma_device_id": "system", 00:08:56.883 "dma_device_type": 1 00:08:56.883 }, 00:08:56.883 { 00:08:56.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.883 "dma_device_type": 2 00:08:56.884 } 00:08:56.884 ], 00:08:56.884 "driver_specific": {} 00:08:56.884 } 00:08:56.884 ] 
00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.884 "name": "Existed_Raid", 00:08:56.884 "uuid": "8b294323-8f6b-4506-8359-6ef89dae7e2e", 00:08:56.884 "strip_size_kb": 64, 00:08:56.884 "state": "online", 00:08:56.884 "raid_level": "raid0", 00:08:56.884 "superblock": false, 00:08:56.884 "num_base_bdevs": 3, 00:08:56.884 "num_base_bdevs_discovered": 3, 00:08:56.884 "num_base_bdevs_operational": 3, 00:08:56.884 "base_bdevs_list": [ 00:08:56.884 { 00:08:56.884 "name": "BaseBdev1", 00:08:56.884 "uuid": "a177c47e-bdc1-440a-ae81-2c12c53ede2b", 00:08:56.884 "is_configured": true, 00:08:56.884 "data_offset": 0, 00:08:56.884 "data_size": 65536 00:08:56.884 }, 00:08:56.884 { 00:08:56.884 "name": "BaseBdev2", 00:08:56.884 "uuid": "3652dc0c-3f61-447c-9376-3118fe3a4518", 00:08:56.884 "is_configured": true, 00:08:56.884 "data_offset": 0, 00:08:56.884 "data_size": 65536 00:08:56.884 }, 00:08:56.884 { 00:08:56.884 "name": "BaseBdev3", 00:08:56.884 "uuid": "f3564ebc-e461-4251-9132-b416a7d04f0c", 00:08:56.884 "is_configured": true, 00:08:56.884 "data_offset": 0, 00:08:56.884 "data_size": 65536 00:08:56.884 } 00:08:56.884 ] 00:08:56.884 }' 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.884 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.144 [2024-10-13 02:23:15.767712] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.144 "name": "Existed_Raid", 00:08:57.144 "aliases": [ 00:08:57.144 "8b294323-8f6b-4506-8359-6ef89dae7e2e" 00:08:57.144 ], 00:08:57.144 "product_name": "Raid Volume", 00:08:57.144 "block_size": 512, 00:08:57.144 "num_blocks": 196608, 00:08:57.144 "uuid": "8b294323-8f6b-4506-8359-6ef89dae7e2e", 00:08:57.144 "assigned_rate_limits": { 00:08:57.144 "rw_ios_per_sec": 0, 00:08:57.144 "rw_mbytes_per_sec": 0, 00:08:57.144 "r_mbytes_per_sec": 0, 00:08:57.144 "w_mbytes_per_sec": 0 00:08:57.144 }, 00:08:57.144 "claimed": false, 00:08:57.144 "zoned": false, 00:08:57.144 "supported_io_types": { 00:08:57.144 "read": true, 00:08:57.144 "write": true, 00:08:57.144 "unmap": true, 00:08:57.144 "flush": true, 00:08:57.144 "reset": true, 00:08:57.144 "nvme_admin": false, 00:08:57.144 "nvme_io": false, 00:08:57.144 "nvme_io_md": false, 00:08:57.144 "write_zeroes": true, 00:08:57.144 "zcopy": false, 00:08:57.144 "get_zone_info": false, 00:08:57.144 "zone_management": false, 00:08:57.144 
"zone_append": false, 00:08:57.144 "compare": false, 00:08:57.144 "compare_and_write": false, 00:08:57.144 "abort": false, 00:08:57.144 "seek_hole": false, 00:08:57.144 "seek_data": false, 00:08:57.144 "copy": false, 00:08:57.144 "nvme_iov_md": false 00:08:57.144 }, 00:08:57.144 "memory_domains": [ 00:08:57.144 { 00:08:57.144 "dma_device_id": "system", 00:08:57.144 "dma_device_type": 1 00:08:57.144 }, 00:08:57.144 { 00:08:57.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.144 "dma_device_type": 2 00:08:57.144 }, 00:08:57.144 { 00:08:57.144 "dma_device_id": "system", 00:08:57.144 "dma_device_type": 1 00:08:57.144 }, 00:08:57.144 { 00:08:57.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.144 "dma_device_type": 2 00:08:57.144 }, 00:08:57.144 { 00:08:57.144 "dma_device_id": "system", 00:08:57.144 "dma_device_type": 1 00:08:57.144 }, 00:08:57.144 { 00:08:57.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.144 "dma_device_type": 2 00:08:57.144 } 00:08:57.144 ], 00:08:57.144 "driver_specific": { 00:08:57.144 "raid": { 00:08:57.144 "uuid": "8b294323-8f6b-4506-8359-6ef89dae7e2e", 00:08:57.144 "strip_size_kb": 64, 00:08:57.144 "state": "online", 00:08:57.144 "raid_level": "raid0", 00:08:57.144 "superblock": false, 00:08:57.144 "num_base_bdevs": 3, 00:08:57.144 "num_base_bdevs_discovered": 3, 00:08:57.144 "num_base_bdevs_operational": 3, 00:08:57.144 "base_bdevs_list": [ 00:08:57.144 { 00:08:57.144 "name": "BaseBdev1", 00:08:57.144 "uuid": "a177c47e-bdc1-440a-ae81-2c12c53ede2b", 00:08:57.144 "is_configured": true, 00:08:57.144 "data_offset": 0, 00:08:57.144 "data_size": 65536 00:08:57.144 }, 00:08:57.144 { 00:08:57.144 "name": "BaseBdev2", 00:08:57.144 "uuid": "3652dc0c-3f61-447c-9376-3118fe3a4518", 00:08:57.144 "is_configured": true, 00:08:57.144 "data_offset": 0, 00:08:57.144 "data_size": 65536 00:08:57.144 }, 00:08:57.144 { 00:08:57.144 "name": "BaseBdev3", 00:08:57.144 "uuid": "f3564ebc-e461-4251-9132-b416a7d04f0c", 00:08:57.144 "is_configured": true, 
00:08:57.144 "data_offset": 0, 00:08:57.144 "data_size": 65536 00:08:57.144 } 00:08:57.144 ] 00:08:57.144 } 00:08:57.144 } 00:08:57.144 }' 00:08:57.144 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:57.405 BaseBdev2 00:08:57.405 BaseBdev3' 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.405 02:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.405 [2024-10-13 02:23:16.050995] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.405 [2024-10-13 02:23:16.051068] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.405 [2024-10-13 02:23:16.051179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.405 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.666 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.666 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.666 "name": "Existed_Raid", 00:08:57.666 "uuid": "8b294323-8f6b-4506-8359-6ef89dae7e2e", 00:08:57.666 "strip_size_kb": 64, 00:08:57.666 "state": "offline", 00:08:57.666 "raid_level": "raid0", 00:08:57.666 "superblock": false, 00:08:57.666 "num_base_bdevs": 3, 00:08:57.666 "num_base_bdevs_discovered": 2, 00:08:57.666 "num_base_bdevs_operational": 2, 00:08:57.666 "base_bdevs_list": [ 00:08:57.666 { 00:08:57.666 "name": null, 00:08:57.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.666 "is_configured": false, 00:08:57.666 "data_offset": 0, 00:08:57.666 "data_size": 65536 00:08:57.666 }, 00:08:57.666 { 00:08:57.666 "name": "BaseBdev2", 00:08:57.666 "uuid": "3652dc0c-3f61-447c-9376-3118fe3a4518", 00:08:57.666 "is_configured": true, 00:08:57.666 "data_offset": 0, 00:08:57.666 "data_size": 65536 00:08:57.666 }, 00:08:57.666 { 00:08:57.666 "name": "BaseBdev3", 00:08:57.666 "uuid": "f3564ebc-e461-4251-9132-b416a7d04f0c", 00:08:57.666 "is_configured": true, 00:08:57.666 "data_offset": 0, 00:08:57.666 "data_size": 65536 00:08:57.666 } 00:08:57.666 ] 00:08:57.666 }' 00:08:57.666 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.666 02:23:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 [2024-10-13 02:23:16.575623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.926 02:23:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.187 [2024-10-13 02:23:16.631031] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.187 [2024-10-13 02:23:16.631211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.187 BaseBdev2 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.187 [ 00:08:58.187 { 00:08:58.187 "name": "BaseBdev2", 00:08:58.187 "aliases": [ 00:08:58.187 "a8cef004-afbd-469f-b694-42997123c5ed" 00:08:58.187 ], 00:08:58.187 "product_name": "Malloc disk", 00:08:58.187 "block_size": 512, 00:08:58.187 "num_blocks": 65536, 00:08:58.187 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:08:58.187 "assigned_rate_limits": { 00:08:58.187 "rw_ios_per_sec": 0, 00:08:58.187 "rw_mbytes_per_sec": 0, 00:08:58.187 "r_mbytes_per_sec": 0, 00:08:58.187 "w_mbytes_per_sec": 0 00:08:58.187 }, 00:08:58.187 "claimed": false, 00:08:58.187 "zoned": false, 00:08:58.187 "supported_io_types": { 00:08:58.187 "read": true, 00:08:58.187 "write": true, 00:08:58.187 "unmap": true, 00:08:58.187 "flush": true, 00:08:58.187 "reset": true, 00:08:58.187 "nvme_admin": false, 00:08:58.187 "nvme_io": false, 00:08:58.187 "nvme_io_md": false, 00:08:58.187 "write_zeroes": true, 00:08:58.187 "zcopy": true, 00:08:58.187 "get_zone_info": false, 00:08:58.187 "zone_management": false, 00:08:58.187 "zone_append": false, 00:08:58.187 "compare": false, 00:08:58.187 "compare_and_write": false, 00:08:58.187 "abort": true, 00:08:58.187 "seek_hole": false, 00:08:58.187 "seek_data": false, 00:08:58.187 "copy": true, 00:08:58.187 "nvme_iov_md": false 00:08:58.187 }, 00:08:58.187 "memory_domains": [ 00:08:58.187 { 00:08:58.187 "dma_device_id": "system", 00:08:58.187 "dma_device_type": 1 00:08:58.187 }, 
00:08:58.187 { 00:08:58.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.187 "dma_device_type": 2 00:08:58.187 } 00:08:58.187 ], 00:08:58.187 "driver_specific": {} 00:08:58.187 } 00:08:58.187 ] 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.187 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.188 BaseBdev3 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.188 [ 00:08:58.188 { 00:08:58.188 "name": "BaseBdev3", 00:08:58.188 "aliases": [ 00:08:58.188 "4c3b66af-dd1c-42f2-a965-8baf7731452e" 00:08:58.188 ], 00:08:58.188 "product_name": "Malloc disk", 00:08:58.188 "block_size": 512, 00:08:58.188 "num_blocks": 65536, 00:08:58.188 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:08:58.188 "assigned_rate_limits": { 00:08:58.188 "rw_ios_per_sec": 0, 00:08:58.188 "rw_mbytes_per_sec": 0, 00:08:58.188 "r_mbytes_per_sec": 0, 00:08:58.188 "w_mbytes_per_sec": 0 00:08:58.188 }, 00:08:58.188 "claimed": false, 00:08:58.188 "zoned": false, 00:08:58.188 "supported_io_types": { 00:08:58.188 "read": true, 00:08:58.188 "write": true, 00:08:58.188 "unmap": true, 00:08:58.188 "flush": true, 00:08:58.188 "reset": true, 00:08:58.188 "nvme_admin": false, 00:08:58.188 "nvme_io": false, 00:08:58.188 "nvme_io_md": false, 00:08:58.188 "write_zeroes": true, 00:08:58.188 "zcopy": true, 00:08:58.188 "get_zone_info": false, 00:08:58.188 "zone_management": false, 00:08:58.188 "zone_append": false, 00:08:58.188 "compare": false, 00:08:58.188 "compare_and_write": false, 00:08:58.188 "abort": true, 00:08:58.188 "seek_hole": false, 00:08:58.188 "seek_data": false, 00:08:58.188 "copy": true, 00:08:58.188 "nvme_iov_md": false 00:08:58.188 }, 00:08:58.188 "memory_domains": [ 00:08:58.188 { 00:08:58.188 "dma_device_id": "system", 00:08:58.188 "dma_device_type": 1 00:08:58.188 }, 00:08:58.188 { 
00:08:58.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.188 "dma_device_type": 2 00:08:58.188 } 00:08:58.188 ], 00:08:58.188 "driver_specific": {} 00:08:58.188 } 00:08:58.188 ] 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.188 [2024-10-13 02:23:16.804960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.188 [2024-10-13 02:23:16.805110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.188 [2024-10-13 02:23:16.805160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.188 [2024-10-13 02:23:16.807058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.188 "name": "Existed_Raid", 00:08:58.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.188 "strip_size_kb": 64, 00:08:58.188 "state": "configuring", 00:08:58.188 "raid_level": "raid0", 00:08:58.188 "superblock": false, 00:08:58.188 "num_base_bdevs": 3, 00:08:58.188 "num_base_bdevs_discovered": 2, 00:08:58.188 "num_base_bdevs_operational": 3, 00:08:58.188 "base_bdevs_list": [ 00:08:58.188 { 00:08:58.188 "name": "BaseBdev1", 00:08:58.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.188 
"is_configured": false, 00:08:58.188 "data_offset": 0, 00:08:58.188 "data_size": 0 00:08:58.188 }, 00:08:58.188 { 00:08:58.188 "name": "BaseBdev2", 00:08:58.188 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:08:58.188 "is_configured": true, 00:08:58.188 "data_offset": 0, 00:08:58.188 "data_size": 65536 00:08:58.188 }, 00:08:58.188 { 00:08:58.188 "name": "BaseBdev3", 00:08:58.188 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:08:58.188 "is_configured": true, 00:08:58.188 "data_offset": 0, 00:08:58.188 "data_size": 65536 00:08:58.188 } 00:08:58.188 ] 00:08:58.188 }' 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.188 02:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.758 [2024-10-13 02:23:17.264072] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.758 02:23:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.758 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.758 "name": "Existed_Raid", 00:08:58.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.758 "strip_size_kb": 64, 00:08:58.758 "state": "configuring", 00:08:58.758 "raid_level": "raid0", 00:08:58.759 "superblock": false, 00:08:58.759 "num_base_bdevs": 3, 00:08:58.759 "num_base_bdevs_discovered": 1, 00:08:58.759 "num_base_bdevs_operational": 3, 00:08:58.759 "base_bdevs_list": [ 00:08:58.759 { 00:08:58.759 "name": "BaseBdev1", 00:08:58.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.759 "is_configured": false, 00:08:58.759 "data_offset": 0, 00:08:58.759 "data_size": 0 00:08:58.759 }, 00:08:58.759 { 00:08:58.759 "name": null, 00:08:58.759 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:08:58.759 "is_configured": false, 00:08:58.759 "data_offset": 0, 
00:08:58.759 "data_size": 65536 00:08:58.759 }, 00:08:58.759 { 00:08:58.759 "name": "BaseBdev3", 00:08:58.759 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:08:58.759 "is_configured": true, 00:08:58.759 "data_offset": 0, 00:08:58.759 "data_size": 65536 00:08:58.759 } 00:08:58.759 ] 00:08:58.759 }' 00:08:58.759 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.759 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.328 [2024-10-13 02:23:17.766059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.328 BaseBdev1 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.328 [ 00:08:59.328 { 00:08:59.328 "name": "BaseBdev1", 00:08:59.328 "aliases": [ 00:08:59.328 "d2fbe127-7112-4322-a46d-672ad95d8f58" 00:08:59.328 ], 00:08:59.328 "product_name": "Malloc disk", 00:08:59.328 "block_size": 512, 00:08:59.328 "num_blocks": 65536, 00:08:59.328 "uuid": "d2fbe127-7112-4322-a46d-672ad95d8f58", 00:08:59.328 "assigned_rate_limits": { 00:08:59.328 "rw_ios_per_sec": 0, 00:08:59.328 "rw_mbytes_per_sec": 0, 00:08:59.328 "r_mbytes_per_sec": 0, 00:08:59.328 "w_mbytes_per_sec": 0 00:08:59.328 }, 00:08:59.328 "claimed": true, 00:08:59.328 "claim_type": "exclusive_write", 00:08:59.328 "zoned": false, 00:08:59.328 "supported_io_types": { 00:08:59.328 "read": true, 00:08:59.328 "write": true, 00:08:59.328 "unmap": 
true, 00:08:59.328 "flush": true, 00:08:59.328 "reset": true, 00:08:59.328 "nvme_admin": false, 00:08:59.328 "nvme_io": false, 00:08:59.328 "nvme_io_md": false, 00:08:59.328 "write_zeroes": true, 00:08:59.328 "zcopy": true, 00:08:59.328 "get_zone_info": false, 00:08:59.328 "zone_management": false, 00:08:59.328 "zone_append": false, 00:08:59.328 "compare": false, 00:08:59.328 "compare_and_write": false, 00:08:59.328 "abort": true, 00:08:59.328 "seek_hole": false, 00:08:59.328 "seek_data": false, 00:08:59.328 "copy": true, 00:08:59.328 "nvme_iov_md": false 00:08:59.328 }, 00:08:59.328 "memory_domains": [ 00:08:59.328 { 00:08:59.328 "dma_device_id": "system", 00:08:59.328 "dma_device_type": 1 00:08:59.328 }, 00:08:59.328 { 00:08:59.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.328 "dma_device_type": 2 00:08:59.328 } 00:08:59.328 ], 00:08:59.328 "driver_specific": {} 00:08:59.328 } 00:08:59.328 ] 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.328 02:23:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.328 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.329 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.329 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.329 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.329 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.329 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.329 "name": "Existed_Raid", 00:08:59.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.329 "strip_size_kb": 64, 00:08:59.329 "state": "configuring", 00:08:59.329 "raid_level": "raid0", 00:08:59.329 "superblock": false, 00:08:59.329 "num_base_bdevs": 3, 00:08:59.329 "num_base_bdevs_discovered": 2, 00:08:59.329 "num_base_bdevs_operational": 3, 00:08:59.329 "base_bdevs_list": [ 00:08:59.329 { 00:08:59.329 "name": "BaseBdev1", 00:08:59.329 "uuid": "d2fbe127-7112-4322-a46d-672ad95d8f58", 00:08:59.329 "is_configured": true, 00:08:59.329 "data_offset": 0, 00:08:59.329 "data_size": 65536 00:08:59.329 }, 00:08:59.329 { 00:08:59.329 "name": null, 00:08:59.329 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:08:59.329 "is_configured": false, 00:08:59.329 "data_offset": 0, 00:08:59.329 "data_size": 65536 00:08:59.329 }, 00:08:59.329 { 00:08:59.329 "name": "BaseBdev3", 00:08:59.329 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:08:59.329 "is_configured": true, 00:08:59.329 "data_offset": 0, 
00:08:59.329 "data_size": 65536 00:08:59.329 } 00:08:59.329 ] 00:08:59.329 }' 00:08:59.329 02:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.329 02:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.588 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.588 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.588 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.588 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.588 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.848 [2024-10-13 02:23:18.297243] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.848 "name": "Existed_Raid", 00:08:59.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.848 "strip_size_kb": 64, 00:08:59.848 "state": "configuring", 00:08:59.848 "raid_level": "raid0", 00:08:59.848 "superblock": false, 00:08:59.848 "num_base_bdevs": 3, 00:08:59.848 "num_base_bdevs_discovered": 1, 00:08:59.848 "num_base_bdevs_operational": 3, 00:08:59.848 "base_bdevs_list": [ 00:08:59.848 { 00:08:59.848 "name": "BaseBdev1", 00:08:59.848 "uuid": "d2fbe127-7112-4322-a46d-672ad95d8f58", 00:08:59.848 "is_configured": true, 00:08:59.848 "data_offset": 0, 00:08:59.848 "data_size": 65536 00:08:59.848 }, 00:08:59.848 { 
00:08:59.848 "name": null, 00:08:59.848 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:08:59.848 "is_configured": false, 00:08:59.848 "data_offset": 0, 00:08:59.848 "data_size": 65536 00:08:59.848 }, 00:08:59.848 { 00:08:59.848 "name": null, 00:08:59.848 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:08:59.848 "is_configured": false, 00:08:59.848 "data_offset": 0, 00:08:59.848 "data_size": 65536 00:08:59.848 } 00:08:59.848 ] 00:08:59.848 }' 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.848 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.108 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.108 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.108 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.108 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.108 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.367 [2024-10-13 02:23:18.796417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.367 "name": "Existed_Raid", 00:09:00.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.367 "strip_size_kb": 64, 00:09:00.367 "state": "configuring", 00:09:00.367 "raid_level": "raid0", 00:09:00.367 
"superblock": false, 00:09:00.367 "num_base_bdevs": 3, 00:09:00.367 "num_base_bdevs_discovered": 2, 00:09:00.367 "num_base_bdevs_operational": 3, 00:09:00.367 "base_bdevs_list": [ 00:09:00.367 { 00:09:00.367 "name": "BaseBdev1", 00:09:00.367 "uuid": "d2fbe127-7112-4322-a46d-672ad95d8f58", 00:09:00.367 "is_configured": true, 00:09:00.367 "data_offset": 0, 00:09:00.367 "data_size": 65536 00:09:00.367 }, 00:09:00.367 { 00:09:00.367 "name": null, 00:09:00.367 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:09:00.367 "is_configured": false, 00:09:00.367 "data_offset": 0, 00:09:00.367 "data_size": 65536 00:09:00.367 }, 00:09:00.367 { 00:09:00.367 "name": "BaseBdev3", 00:09:00.367 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:09:00.367 "is_configured": true, 00:09:00.367 "data_offset": 0, 00:09:00.367 "data_size": 65536 00:09:00.367 } 00:09:00.367 ] 00:09:00.367 }' 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.367 02:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.626 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.626 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.626 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.626 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.626 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.626 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:00.626 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.626 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:00.626 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.626 [2024-10-13 02:23:19.303615] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.885 "name": "Existed_Raid", 00:09:00.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.885 "strip_size_kb": 64, 00:09:00.885 "state": "configuring", 00:09:00.885 "raid_level": "raid0", 00:09:00.885 "superblock": false, 00:09:00.885 "num_base_bdevs": 3, 00:09:00.885 "num_base_bdevs_discovered": 1, 00:09:00.885 "num_base_bdevs_operational": 3, 00:09:00.885 "base_bdevs_list": [ 00:09:00.885 { 00:09:00.885 "name": null, 00:09:00.885 "uuid": "d2fbe127-7112-4322-a46d-672ad95d8f58", 00:09:00.885 "is_configured": false, 00:09:00.885 "data_offset": 0, 00:09:00.885 "data_size": 65536 00:09:00.885 }, 00:09:00.885 { 00:09:00.885 "name": null, 00:09:00.885 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:09:00.885 "is_configured": false, 00:09:00.885 "data_offset": 0, 00:09:00.885 "data_size": 65536 00:09:00.885 }, 00:09:00.885 { 00:09:00.885 "name": "BaseBdev3", 00:09:00.885 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:09:00.885 "is_configured": true, 00:09:00.885 "data_offset": 0, 00:09:00.885 "data_size": 65536 00:09:00.885 } 00:09:00.885 ] 00:09:00.885 }' 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.885 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.144 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.144 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.144 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.144 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.145 [2024-10-13 02:23:19.773107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.145 "name": "Existed_Raid", 00:09:01.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.145 "strip_size_kb": 64, 00:09:01.145 "state": "configuring", 00:09:01.145 "raid_level": "raid0", 00:09:01.145 "superblock": false, 00:09:01.145 "num_base_bdevs": 3, 00:09:01.145 "num_base_bdevs_discovered": 2, 00:09:01.145 "num_base_bdevs_operational": 3, 00:09:01.145 "base_bdevs_list": [ 00:09:01.145 { 00:09:01.145 "name": null, 00:09:01.145 "uuid": "d2fbe127-7112-4322-a46d-672ad95d8f58", 00:09:01.145 "is_configured": false, 00:09:01.145 "data_offset": 0, 00:09:01.145 "data_size": 65536 00:09:01.145 }, 00:09:01.145 { 00:09:01.145 "name": "BaseBdev2", 00:09:01.145 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:09:01.145 "is_configured": true, 00:09:01.145 "data_offset": 0, 00:09:01.145 "data_size": 65536 00:09:01.145 }, 00:09:01.145 { 00:09:01.145 "name": "BaseBdev3", 00:09:01.145 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:09:01.145 "is_configured": true, 00:09:01.145 "data_offset": 0, 00:09:01.145 "data_size": 65536 00:09:01.145 } 00:09:01.145 ] 00:09:01.145 }' 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.145 02:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.725 
02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.725 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d2fbe127-7112-4322-a46d-672ad95d8f58 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.726 [2024-10-13 02:23:20.343497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:01.726 [2024-10-13 02:23:20.343645] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:01.726 [2024-10-13 02:23:20.343675] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:01.726 [2024-10-13 02:23:20.343969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 
00:09:01.726 [2024-10-13 02:23:20.344143] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:01.726 [2024-10-13 02:23:20.344185] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:01.726 [2024-10-13 02:23:20.344438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.726 NewBaseBdev 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:01.726 [ 00:09:01.726 { 00:09:01.726 "name": "NewBaseBdev", 00:09:01.726 "aliases": [ 00:09:01.726 "d2fbe127-7112-4322-a46d-672ad95d8f58" 00:09:01.726 ], 00:09:01.726 "product_name": "Malloc disk", 00:09:01.726 "block_size": 512, 00:09:01.726 "num_blocks": 65536, 00:09:01.726 "uuid": "d2fbe127-7112-4322-a46d-672ad95d8f58", 00:09:01.726 "assigned_rate_limits": { 00:09:01.726 "rw_ios_per_sec": 0, 00:09:01.726 "rw_mbytes_per_sec": 0, 00:09:01.726 "r_mbytes_per_sec": 0, 00:09:01.726 "w_mbytes_per_sec": 0 00:09:01.726 }, 00:09:01.726 "claimed": true, 00:09:01.726 "claim_type": "exclusive_write", 00:09:01.726 "zoned": false, 00:09:01.726 "supported_io_types": { 00:09:01.726 "read": true, 00:09:01.726 "write": true, 00:09:01.726 "unmap": true, 00:09:01.726 "flush": true, 00:09:01.726 "reset": true, 00:09:01.726 "nvme_admin": false, 00:09:01.726 "nvme_io": false, 00:09:01.726 "nvme_io_md": false, 00:09:01.726 "write_zeroes": true, 00:09:01.726 "zcopy": true, 00:09:01.726 "get_zone_info": false, 00:09:01.726 "zone_management": false, 00:09:01.726 "zone_append": false, 00:09:01.726 "compare": false, 00:09:01.726 "compare_and_write": false, 00:09:01.726 "abort": true, 00:09:01.726 "seek_hole": false, 00:09:01.726 "seek_data": false, 00:09:01.726 "copy": true, 00:09:01.726 "nvme_iov_md": false 00:09:01.726 }, 00:09:01.726 "memory_domains": [ 00:09:01.726 { 00:09:01.726 "dma_device_id": "system", 00:09:01.726 "dma_device_type": 1 00:09:01.726 }, 00:09:01.726 { 00:09:01.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.726 "dma_device_type": 2 00:09:01.726 } 00:09:01.726 ], 00:09:01.726 "driver_specific": {} 00:09:01.726 } 00:09:01.726 ] 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.726 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.988 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.988 "name": "Existed_Raid", 00:09:01.988 "uuid": "69ccc289-1ea8-4aa4-a742-34b2604250a7", 00:09:01.988 "strip_size_kb": 64, 00:09:01.988 "state": "online", 00:09:01.988 "raid_level": "raid0", 00:09:01.988 "superblock": false, 00:09:01.988 "num_base_bdevs": 3, 00:09:01.988 
"num_base_bdevs_discovered": 3, 00:09:01.988 "num_base_bdevs_operational": 3, 00:09:01.988 "base_bdevs_list": [ 00:09:01.988 { 00:09:01.988 "name": "NewBaseBdev", 00:09:01.988 "uuid": "d2fbe127-7112-4322-a46d-672ad95d8f58", 00:09:01.988 "is_configured": true, 00:09:01.988 "data_offset": 0, 00:09:01.988 "data_size": 65536 00:09:01.988 }, 00:09:01.988 { 00:09:01.988 "name": "BaseBdev2", 00:09:01.988 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:09:01.988 "is_configured": true, 00:09:01.988 "data_offset": 0, 00:09:01.988 "data_size": 65536 00:09:01.988 }, 00:09:01.988 { 00:09:01.988 "name": "BaseBdev3", 00:09:01.988 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:09:01.988 "is_configured": true, 00:09:01.988 "data_offset": 0, 00:09:01.988 "data_size": 65536 00:09:01.988 } 00:09:01.988 ] 00:09:01.988 }' 00:09:01.988 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.988 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.247 [2024-10-13 02:23:20.831367] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.247 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.247 "name": "Existed_Raid", 00:09:02.247 "aliases": [ 00:09:02.247 "69ccc289-1ea8-4aa4-a742-34b2604250a7" 00:09:02.247 ], 00:09:02.247 "product_name": "Raid Volume", 00:09:02.247 "block_size": 512, 00:09:02.247 "num_blocks": 196608, 00:09:02.247 "uuid": "69ccc289-1ea8-4aa4-a742-34b2604250a7", 00:09:02.247 "assigned_rate_limits": { 00:09:02.247 "rw_ios_per_sec": 0, 00:09:02.247 "rw_mbytes_per_sec": 0, 00:09:02.247 "r_mbytes_per_sec": 0, 00:09:02.247 "w_mbytes_per_sec": 0 00:09:02.247 }, 00:09:02.247 "claimed": false, 00:09:02.247 "zoned": false, 00:09:02.247 "supported_io_types": { 00:09:02.247 "read": true, 00:09:02.247 "write": true, 00:09:02.247 "unmap": true, 00:09:02.247 "flush": true, 00:09:02.247 "reset": true, 00:09:02.247 "nvme_admin": false, 00:09:02.248 "nvme_io": false, 00:09:02.248 "nvme_io_md": false, 00:09:02.248 "write_zeroes": true, 00:09:02.248 "zcopy": false, 00:09:02.248 "get_zone_info": false, 00:09:02.248 "zone_management": false, 00:09:02.248 "zone_append": false, 00:09:02.248 "compare": false, 00:09:02.248 "compare_and_write": false, 00:09:02.248 "abort": false, 00:09:02.248 "seek_hole": false, 00:09:02.248 "seek_data": false, 00:09:02.248 "copy": false, 00:09:02.248 "nvme_iov_md": false 00:09:02.248 }, 00:09:02.248 "memory_domains": [ 00:09:02.248 { 00:09:02.248 "dma_device_id": "system", 00:09:02.248 "dma_device_type": 1 00:09:02.248 }, 00:09:02.248 { 00:09:02.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.248 "dma_device_type": 2 00:09:02.248 }, 
00:09:02.248 { 00:09:02.248 "dma_device_id": "system", 00:09:02.248 "dma_device_type": 1 00:09:02.248 }, 00:09:02.248 { 00:09:02.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.248 "dma_device_type": 2 00:09:02.248 }, 00:09:02.248 { 00:09:02.248 "dma_device_id": "system", 00:09:02.248 "dma_device_type": 1 00:09:02.248 }, 00:09:02.248 { 00:09:02.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.248 "dma_device_type": 2 00:09:02.248 } 00:09:02.248 ], 00:09:02.248 "driver_specific": { 00:09:02.248 "raid": { 00:09:02.248 "uuid": "69ccc289-1ea8-4aa4-a742-34b2604250a7", 00:09:02.248 "strip_size_kb": 64, 00:09:02.248 "state": "online", 00:09:02.248 "raid_level": "raid0", 00:09:02.248 "superblock": false, 00:09:02.248 "num_base_bdevs": 3, 00:09:02.248 "num_base_bdevs_discovered": 3, 00:09:02.248 "num_base_bdevs_operational": 3, 00:09:02.248 "base_bdevs_list": [ 00:09:02.248 { 00:09:02.248 "name": "NewBaseBdev", 00:09:02.248 "uuid": "d2fbe127-7112-4322-a46d-672ad95d8f58", 00:09:02.248 "is_configured": true, 00:09:02.248 "data_offset": 0, 00:09:02.248 "data_size": 65536 00:09:02.248 }, 00:09:02.248 { 00:09:02.248 "name": "BaseBdev2", 00:09:02.248 "uuid": "a8cef004-afbd-469f-b694-42997123c5ed", 00:09:02.248 "is_configured": true, 00:09:02.248 "data_offset": 0, 00:09:02.248 "data_size": 65536 00:09:02.248 }, 00:09:02.248 { 00:09:02.248 "name": "BaseBdev3", 00:09:02.248 "uuid": "4c3b66af-dd1c-42f2-a965-8baf7731452e", 00:09:02.248 "is_configured": true, 00:09:02.248 "data_offset": 0, 00:09:02.248 "data_size": 65536 00:09:02.248 } 00:09:02.248 ] 00:09:02.248 } 00:09:02.248 } 00:09:02.248 }' 00:09:02.248 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.248 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:02.248 BaseBdev2 00:09:02.248 BaseBdev3' 00:09:02.248 02:23:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.508 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.508 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.508 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:02.508 02:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.508 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.508 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.508 02:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.508 [2024-10-13 02:23:21.114557] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.508 [2024-10-13 02:23:21.114635] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.508 [2024-10-13 02:23:21.114739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.508 [2024-10-13 02:23:21.114810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.508 [2024-10-13 02:23:21.114847] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74915 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74915 ']' 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74915 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74915 00:09:02.508 killing process with pid 74915 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74915' 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74915 00:09:02.508 [2024-10-13 02:23:21.164323] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.508 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74915 00:09:02.768 [2024-10-13 02:23:21.195011] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.768 ************************************ 00:09:02.768 END TEST raid_state_function_test 00:09:02.768 02:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:02.768 00:09:02.768 real 0m9.109s 00:09:02.768 user 0m15.431s 00:09:02.768 sys 
0m1.899s 00:09:02.768 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.768 02:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.768 ************************************ 00:09:03.059 02:23:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:03.059 02:23:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:03.059 02:23:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.059 02:23:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.059 ************************************ 00:09:03.059 START TEST raid_state_function_test_sb 00:09:03.059 ************************************ 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:03.059 Process raid pid: 75520 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75520 
00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75520' 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75520 00:09:03.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75520 ']' 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.059 02:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.060 02:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.060 [2024-10-13 02:23:21.623116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:03.060 [2024-10-13 02:23:21.623266] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.320 [2024-10-13 02:23:21.773584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.320 [2024-10-13 02:23:21.827763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.320 [2024-10-13 02:23:21.869581] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.320 [2024-10-13 02:23:21.869619] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.890 [2024-10-13 02:23:22.486715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.890 [2024-10-13 02:23:22.486878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.890 [2024-10-13 02:23:22.486929] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.890 [2024-10-13 02:23:22.486955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.890 [2024-10-13 02:23:22.486973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:03.890 [2024-10-13 02:23:22.486995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.890 "name": "Existed_Raid", 00:09:03.890 "uuid": "61edbcc6-f850-4e1e-8c91-12fe6e31b541", 00:09:03.890 "strip_size_kb": 64, 00:09:03.890 "state": "configuring", 00:09:03.890 "raid_level": "raid0", 00:09:03.890 "superblock": true, 00:09:03.890 "num_base_bdevs": 3, 00:09:03.890 "num_base_bdevs_discovered": 0, 00:09:03.890 "num_base_bdevs_operational": 3, 00:09:03.890 "base_bdevs_list": [ 00:09:03.890 { 00:09:03.890 "name": "BaseBdev1", 00:09:03.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.890 "is_configured": false, 00:09:03.890 "data_offset": 0, 00:09:03.890 "data_size": 0 00:09:03.890 }, 00:09:03.890 { 00:09:03.890 "name": "BaseBdev2", 00:09:03.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.890 "is_configured": false, 00:09:03.890 "data_offset": 0, 00:09:03.890 "data_size": 0 00:09:03.890 }, 00:09:03.890 { 00:09:03.890 "name": "BaseBdev3", 00:09:03.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.890 "is_configured": false, 00:09:03.890 "data_offset": 0, 00:09:03.890 "data_size": 0 00:09:03.890 } 00:09:03.890 ] 00:09:03.890 }' 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.890 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.461 [2024-10-13 02:23:22.913839] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.461 [2024-10-13 02:23:22.914010] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.461 [2024-10-13 02:23:22.925821] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.461 [2024-10-13 02:23:22.925932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.461 [2024-10-13 02:23:22.925960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.461 [2024-10-13 02:23:22.925981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.461 [2024-10-13 02:23:22.926000] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.461 [2024-10-13 02:23:22.926020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.461 [2024-10-13 02:23:22.946660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.461 BaseBdev1 
00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.461 [ 00:09:04.461 { 00:09:04.461 "name": "BaseBdev1", 00:09:04.461 "aliases": [ 00:09:04.461 "dc7b29e4-ccac-412f-9c73-89a601fba81e" 00:09:04.461 ], 00:09:04.461 "product_name": "Malloc disk", 00:09:04.461 "block_size": 512, 00:09:04.461 "num_blocks": 65536, 00:09:04.461 "uuid": "dc7b29e4-ccac-412f-9c73-89a601fba81e", 00:09:04.461 "assigned_rate_limits": { 00:09:04.461 
"rw_ios_per_sec": 0, 00:09:04.461 "rw_mbytes_per_sec": 0, 00:09:04.461 "r_mbytes_per_sec": 0, 00:09:04.461 "w_mbytes_per_sec": 0 00:09:04.461 }, 00:09:04.461 "claimed": true, 00:09:04.461 "claim_type": "exclusive_write", 00:09:04.461 "zoned": false, 00:09:04.461 "supported_io_types": { 00:09:04.461 "read": true, 00:09:04.461 "write": true, 00:09:04.461 "unmap": true, 00:09:04.461 "flush": true, 00:09:04.461 "reset": true, 00:09:04.461 "nvme_admin": false, 00:09:04.461 "nvme_io": false, 00:09:04.461 "nvme_io_md": false, 00:09:04.461 "write_zeroes": true, 00:09:04.461 "zcopy": true, 00:09:04.461 "get_zone_info": false, 00:09:04.461 "zone_management": false, 00:09:04.461 "zone_append": false, 00:09:04.461 "compare": false, 00:09:04.461 "compare_and_write": false, 00:09:04.461 "abort": true, 00:09:04.461 "seek_hole": false, 00:09:04.461 "seek_data": false, 00:09:04.461 "copy": true, 00:09:04.461 "nvme_iov_md": false 00:09:04.461 }, 00:09:04.461 "memory_domains": [ 00:09:04.461 { 00:09:04.461 "dma_device_id": "system", 00:09:04.461 "dma_device_type": 1 00:09:04.461 }, 00:09:04.461 { 00:09:04.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.461 "dma_device_type": 2 00:09:04.461 } 00:09:04.461 ], 00:09:04.461 "driver_specific": {} 00:09:04.461 } 00:09:04.461 ] 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.461 02:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.461 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.461 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.461 "name": "Existed_Raid", 00:09:04.461 "uuid": "1dfce2fa-dcc3-4a7e-bf47-ac622cb06b18", 00:09:04.461 "strip_size_kb": 64, 00:09:04.461 "state": "configuring", 00:09:04.461 "raid_level": "raid0", 00:09:04.461 "superblock": true, 00:09:04.461 "num_base_bdevs": 3, 00:09:04.461 "num_base_bdevs_discovered": 1, 00:09:04.461 "num_base_bdevs_operational": 3, 00:09:04.461 "base_bdevs_list": [ 00:09:04.461 { 00:09:04.461 "name": "BaseBdev1", 00:09:04.461 "uuid": "dc7b29e4-ccac-412f-9c73-89a601fba81e", 00:09:04.461 "is_configured": true, 00:09:04.461 "data_offset": 2048, 00:09:04.461 "data_size": 63488 
00:09:04.461 }, 00:09:04.461 { 00:09:04.461 "name": "BaseBdev2", 00:09:04.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.461 "is_configured": false, 00:09:04.461 "data_offset": 0, 00:09:04.461 "data_size": 0 00:09:04.461 }, 00:09:04.461 { 00:09:04.461 "name": "BaseBdev3", 00:09:04.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.461 "is_configured": false, 00:09:04.461 "data_offset": 0, 00:09:04.461 "data_size": 0 00:09:04.461 } 00:09:04.461 ] 00:09:04.461 }' 00:09:04.461 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.461 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.032 [2024-10-13 02:23:23.429923] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.032 [2024-10-13 02:23:23.430064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.032 [2024-10-13 02:23:23.441961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.032 [2024-10-13 
02:23:23.443782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.032 [2024-10-13 02:23:23.443860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.032 [2024-10-13 02:23:23.443897] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.032 [2024-10-13 02:23:23.443922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.032 "name": "Existed_Raid", 00:09:05.032 "uuid": "5e207ffb-6752-4062-aa57-5aafcf680a9e", 00:09:05.032 "strip_size_kb": 64, 00:09:05.032 "state": "configuring", 00:09:05.032 "raid_level": "raid0", 00:09:05.032 "superblock": true, 00:09:05.032 "num_base_bdevs": 3, 00:09:05.032 "num_base_bdevs_discovered": 1, 00:09:05.032 "num_base_bdevs_operational": 3, 00:09:05.032 "base_bdevs_list": [ 00:09:05.032 { 00:09:05.032 "name": "BaseBdev1", 00:09:05.032 "uuid": "dc7b29e4-ccac-412f-9c73-89a601fba81e", 00:09:05.032 "is_configured": true, 00:09:05.032 "data_offset": 2048, 00:09:05.032 "data_size": 63488 00:09:05.032 }, 00:09:05.032 { 00:09:05.032 "name": "BaseBdev2", 00:09:05.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.032 "is_configured": false, 00:09:05.032 "data_offset": 0, 00:09:05.032 "data_size": 0 00:09:05.032 }, 00:09:05.032 { 00:09:05.032 "name": "BaseBdev3", 00:09:05.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.032 "is_configured": false, 00:09:05.032 "data_offset": 0, 00:09:05.032 "data_size": 0 00:09:05.032 } 00:09:05.032 ] 00:09:05.032 }' 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.032 02:23:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.292 [2024-10-13 02:23:23.904376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.292 BaseBdev2 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.292 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.292 [ 00:09:05.292 { 00:09:05.292 "name": "BaseBdev2", 00:09:05.292 "aliases": [ 00:09:05.292 "ee113fbd-808d-4d3d-bc2e-da35c4ffbee6" 00:09:05.292 ], 00:09:05.292 "product_name": "Malloc disk", 00:09:05.292 "block_size": 512, 00:09:05.292 "num_blocks": 65536, 00:09:05.292 "uuid": "ee113fbd-808d-4d3d-bc2e-da35c4ffbee6", 00:09:05.292 "assigned_rate_limits": { 00:09:05.292 "rw_ios_per_sec": 0, 00:09:05.292 "rw_mbytes_per_sec": 0, 00:09:05.292 "r_mbytes_per_sec": 0, 00:09:05.292 "w_mbytes_per_sec": 0 00:09:05.292 }, 00:09:05.292 "claimed": true, 00:09:05.292 "claim_type": "exclusive_write", 00:09:05.292 "zoned": false, 00:09:05.292 "supported_io_types": { 00:09:05.292 "read": true, 00:09:05.292 "write": true, 00:09:05.292 "unmap": true, 00:09:05.292 "flush": true, 00:09:05.292 "reset": true, 00:09:05.292 "nvme_admin": false, 00:09:05.292 "nvme_io": false, 00:09:05.292 "nvme_io_md": false, 00:09:05.292 "write_zeroes": true, 00:09:05.292 "zcopy": true, 00:09:05.292 "get_zone_info": false, 00:09:05.292 "zone_management": false, 00:09:05.292 "zone_append": false, 00:09:05.292 "compare": false, 00:09:05.292 "compare_and_write": false, 00:09:05.292 "abort": true, 00:09:05.292 "seek_hole": false, 00:09:05.292 "seek_data": false, 00:09:05.292 "copy": true, 00:09:05.292 "nvme_iov_md": false 00:09:05.292 }, 00:09:05.292 "memory_domains": [ 00:09:05.292 { 00:09:05.292 "dma_device_id": "system", 00:09:05.292 "dma_device_type": 1 00:09:05.293 }, 00:09:05.293 { 00:09:05.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.293 "dma_device_type": 2 00:09:05.293 } 00:09:05.293 ], 00:09:05.293 "driver_specific": {} 00:09:05.293 } 00:09:05.293 ] 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.293 02:23:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.552 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.552 "name": "Existed_Raid", 00:09:05.552 "uuid": "5e207ffb-6752-4062-aa57-5aafcf680a9e", 00:09:05.552 "strip_size_kb": 64, 00:09:05.552 "state": "configuring", 00:09:05.552 "raid_level": "raid0", 00:09:05.552 "superblock": true, 00:09:05.552 "num_base_bdevs": 3, 00:09:05.552 "num_base_bdevs_discovered": 2, 00:09:05.552 "num_base_bdevs_operational": 3, 00:09:05.552 "base_bdevs_list": [ 00:09:05.552 { 00:09:05.552 "name": "BaseBdev1", 00:09:05.552 "uuid": "dc7b29e4-ccac-412f-9c73-89a601fba81e", 00:09:05.552 "is_configured": true, 00:09:05.552 "data_offset": 2048, 00:09:05.552 "data_size": 63488 00:09:05.552 }, 00:09:05.552 { 00:09:05.552 "name": "BaseBdev2", 00:09:05.552 "uuid": "ee113fbd-808d-4d3d-bc2e-da35c4ffbee6", 00:09:05.552 "is_configured": true, 00:09:05.552 "data_offset": 2048, 00:09:05.552 "data_size": 63488 00:09:05.552 }, 00:09:05.552 { 00:09:05.552 "name": "BaseBdev3", 00:09:05.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.552 "is_configured": false, 00:09:05.552 "data_offset": 0, 00:09:05.552 "data_size": 0 00:09:05.552 } 00:09:05.552 ] 00:09:05.552 }' 00:09:05.552 02:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.552 02:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.813 [2024-10-13 02:23:24.422772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.813 [2024-10-13 02:23:24.423107] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:05.813 [2024-10-13 02:23:24.423180] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.813 [2024-10-13 02:23:24.423497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:05.813 [2024-10-13 02:23:24.423657] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:05.813 BaseBdev3 00:09:05.813 [2024-10-13 02:23:24.423708] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:05.813 [2024-10-13 02:23:24.423893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.813 [ 00:09:05.813 { 00:09:05.813 "name": "BaseBdev3", 00:09:05.813 "aliases": [ 00:09:05.813 "aa2dec0f-60e1-4e71-8949-8749928b29eb" 00:09:05.813 ], 00:09:05.813 "product_name": "Malloc disk", 00:09:05.813 "block_size": 512, 00:09:05.813 "num_blocks": 65536, 00:09:05.813 "uuid": "aa2dec0f-60e1-4e71-8949-8749928b29eb", 00:09:05.813 "assigned_rate_limits": { 00:09:05.813 "rw_ios_per_sec": 0, 00:09:05.813 "rw_mbytes_per_sec": 0, 00:09:05.813 "r_mbytes_per_sec": 0, 00:09:05.813 "w_mbytes_per_sec": 0 00:09:05.813 }, 00:09:05.813 "claimed": true, 00:09:05.813 "claim_type": "exclusive_write", 00:09:05.813 "zoned": false, 00:09:05.813 "supported_io_types": { 00:09:05.813 "read": true, 00:09:05.813 "write": true, 00:09:05.813 "unmap": true, 00:09:05.813 "flush": true, 00:09:05.813 "reset": true, 00:09:05.813 "nvme_admin": false, 00:09:05.813 "nvme_io": false, 00:09:05.813 "nvme_io_md": false, 00:09:05.813 "write_zeroes": true, 00:09:05.813 "zcopy": true, 00:09:05.813 "get_zone_info": false, 00:09:05.813 "zone_management": false, 00:09:05.813 "zone_append": false, 00:09:05.813 "compare": false, 00:09:05.813 "compare_and_write": false, 00:09:05.813 "abort": true, 00:09:05.813 "seek_hole": false, 00:09:05.813 "seek_data": false, 00:09:05.813 "copy": true, 00:09:05.813 "nvme_iov_md": false 00:09:05.813 }, 00:09:05.813 "memory_domains": [ 00:09:05.813 { 00:09:05.813 "dma_device_id": "system", 00:09:05.813 "dma_device_type": 1 00:09:05.813 }, 00:09:05.813 { 00:09:05.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.813 "dma_device_type": 2 00:09:05.813 } 00:09:05.813 ], 00:09:05.813 "driver_specific": 
{} 00:09:05.813 } 00:09:05.813 ] 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.813 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.073 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.073 "name": "Existed_Raid", 00:09:06.073 "uuid": "5e207ffb-6752-4062-aa57-5aafcf680a9e", 00:09:06.073 "strip_size_kb": 64, 00:09:06.073 "state": "online", 00:09:06.073 "raid_level": "raid0", 00:09:06.073 "superblock": true, 00:09:06.073 "num_base_bdevs": 3, 00:09:06.073 "num_base_bdevs_discovered": 3, 00:09:06.073 "num_base_bdevs_operational": 3, 00:09:06.073 "base_bdevs_list": [ 00:09:06.073 { 00:09:06.073 "name": "BaseBdev1", 00:09:06.073 "uuid": "dc7b29e4-ccac-412f-9c73-89a601fba81e", 00:09:06.073 "is_configured": true, 00:09:06.073 "data_offset": 2048, 00:09:06.073 "data_size": 63488 00:09:06.073 }, 00:09:06.073 { 00:09:06.073 "name": "BaseBdev2", 00:09:06.073 "uuid": "ee113fbd-808d-4d3d-bc2e-da35c4ffbee6", 00:09:06.073 "is_configured": true, 00:09:06.073 "data_offset": 2048, 00:09:06.073 "data_size": 63488 00:09:06.073 }, 00:09:06.073 { 00:09:06.073 "name": "BaseBdev3", 00:09:06.073 "uuid": "aa2dec0f-60e1-4e71-8949-8749928b29eb", 00:09:06.073 "is_configured": true, 00:09:06.073 "data_offset": 2048, 00:09:06.073 "data_size": 63488 00:09:06.073 } 00:09:06.073 ] 00:09:06.073 }' 00:09:06.073 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.073 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.333 [2024-10-13 02:23:24.890340] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.333 "name": "Existed_Raid", 00:09:06.333 "aliases": [ 00:09:06.333 "5e207ffb-6752-4062-aa57-5aafcf680a9e" 00:09:06.333 ], 00:09:06.333 "product_name": "Raid Volume", 00:09:06.333 "block_size": 512, 00:09:06.333 "num_blocks": 190464, 00:09:06.333 "uuid": "5e207ffb-6752-4062-aa57-5aafcf680a9e", 00:09:06.333 "assigned_rate_limits": { 00:09:06.333 "rw_ios_per_sec": 0, 00:09:06.333 "rw_mbytes_per_sec": 0, 00:09:06.333 "r_mbytes_per_sec": 0, 00:09:06.333 "w_mbytes_per_sec": 0 00:09:06.333 }, 00:09:06.333 "claimed": false, 00:09:06.333 "zoned": false, 00:09:06.333 "supported_io_types": { 00:09:06.333 "read": true, 00:09:06.333 "write": true, 00:09:06.333 "unmap": true, 00:09:06.333 "flush": true, 00:09:06.333 "reset": true, 00:09:06.333 "nvme_admin": false, 00:09:06.333 "nvme_io": false, 00:09:06.333 "nvme_io_md": false, 00:09:06.333 
"write_zeroes": true, 00:09:06.333 "zcopy": false, 00:09:06.333 "get_zone_info": false, 00:09:06.333 "zone_management": false, 00:09:06.333 "zone_append": false, 00:09:06.333 "compare": false, 00:09:06.333 "compare_and_write": false, 00:09:06.333 "abort": false, 00:09:06.333 "seek_hole": false, 00:09:06.333 "seek_data": false, 00:09:06.333 "copy": false, 00:09:06.333 "nvme_iov_md": false 00:09:06.333 }, 00:09:06.333 "memory_domains": [ 00:09:06.333 { 00:09:06.333 "dma_device_id": "system", 00:09:06.333 "dma_device_type": 1 00:09:06.333 }, 00:09:06.333 { 00:09:06.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.333 "dma_device_type": 2 00:09:06.333 }, 00:09:06.333 { 00:09:06.333 "dma_device_id": "system", 00:09:06.333 "dma_device_type": 1 00:09:06.333 }, 00:09:06.333 { 00:09:06.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.333 "dma_device_type": 2 00:09:06.333 }, 00:09:06.333 { 00:09:06.333 "dma_device_id": "system", 00:09:06.333 "dma_device_type": 1 00:09:06.333 }, 00:09:06.333 { 00:09:06.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.333 "dma_device_type": 2 00:09:06.333 } 00:09:06.333 ], 00:09:06.333 "driver_specific": { 00:09:06.333 "raid": { 00:09:06.333 "uuid": "5e207ffb-6752-4062-aa57-5aafcf680a9e", 00:09:06.333 "strip_size_kb": 64, 00:09:06.333 "state": "online", 00:09:06.333 "raid_level": "raid0", 00:09:06.333 "superblock": true, 00:09:06.333 "num_base_bdevs": 3, 00:09:06.333 "num_base_bdevs_discovered": 3, 00:09:06.333 "num_base_bdevs_operational": 3, 00:09:06.333 "base_bdevs_list": [ 00:09:06.333 { 00:09:06.333 "name": "BaseBdev1", 00:09:06.333 "uuid": "dc7b29e4-ccac-412f-9c73-89a601fba81e", 00:09:06.333 "is_configured": true, 00:09:06.333 "data_offset": 2048, 00:09:06.333 "data_size": 63488 00:09:06.333 }, 00:09:06.333 { 00:09:06.333 "name": "BaseBdev2", 00:09:06.333 "uuid": "ee113fbd-808d-4d3d-bc2e-da35c4ffbee6", 00:09:06.333 "is_configured": true, 00:09:06.333 "data_offset": 2048, 00:09:06.333 "data_size": 63488 00:09:06.333 }, 
00:09:06.333 { 00:09:06.333 "name": "BaseBdev3", 00:09:06.333 "uuid": "aa2dec0f-60e1-4e71-8949-8749928b29eb", 00:09:06.333 "is_configured": true, 00:09:06.333 "data_offset": 2048, 00:09:06.333 "data_size": 63488 00:09:06.333 } 00:09:06.333 ] 00:09:06.333 } 00:09:06.333 } 00:09:06.333 }' 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:06.333 BaseBdev2 00:09:06.333 BaseBdev3' 00:09:06.333 02:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.593 
02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.593 [2024-10-13 02:23:25.161606] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.593 [2024-10-13 02:23:25.161709] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.593 [2024-10-13 02:23:25.161797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.593 "name": "Existed_Raid", 00:09:06.593 "uuid": "5e207ffb-6752-4062-aa57-5aafcf680a9e", 00:09:06.593 "strip_size_kb": 64, 00:09:06.593 "state": "offline", 00:09:06.593 "raid_level": "raid0", 00:09:06.593 "superblock": true, 00:09:06.593 "num_base_bdevs": 3, 00:09:06.593 "num_base_bdevs_discovered": 2, 00:09:06.593 "num_base_bdevs_operational": 2, 00:09:06.593 "base_bdevs_list": [ 00:09:06.593 { 00:09:06.593 "name": null, 00:09:06.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.593 "is_configured": false, 00:09:06.593 "data_offset": 0, 00:09:06.593 "data_size": 63488 00:09:06.593 }, 00:09:06.593 { 00:09:06.593 "name": "BaseBdev2", 00:09:06.593 "uuid": "ee113fbd-808d-4d3d-bc2e-da35c4ffbee6", 00:09:06.593 "is_configured": true, 00:09:06.593 "data_offset": 2048, 00:09:06.593 "data_size": 63488 00:09:06.593 }, 00:09:06.593 { 00:09:06.593 "name": "BaseBdev3", 00:09:06.593 "uuid": "aa2dec0f-60e1-4e71-8949-8749928b29eb", 
00:09:06.593 "is_configured": true, 00:09:06.593 "data_offset": 2048, 00:09:06.593 "data_size": 63488 00:09:06.593 } 00:09:06.593 ] 00:09:06.593 }' 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.593 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.162 [2024-10-13 02:23:25.640349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.162 [2024-10-13 02:23:25.711626] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:07.162 [2024-10-13 02:23:25.711746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.162 BaseBdev2 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:07.162 02:23:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.162 [ 00:09:07.162 { 00:09:07.162 "name": "BaseBdev2", 00:09:07.162 "aliases": [ 00:09:07.162 "e3b36c71-1e52-449d-9dea-9291fe8b31a9" 00:09:07.162 ], 00:09:07.162 "product_name": "Malloc disk", 00:09:07.162 "block_size": 512, 00:09:07.162 "num_blocks": 65536, 00:09:07.162 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9", 00:09:07.162 "assigned_rate_limits": { 00:09:07.162 "rw_ios_per_sec": 0, 00:09:07.162 "rw_mbytes_per_sec": 0, 00:09:07.162 "r_mbytes_per_sec": 0, 00:09:07.162 "w_mbytes_per_sec": 0 00:09:07.162 }, 00:09:07.162 "claimed": false, 00:09:07.162 "zoned": false, 00:09:07.162 "supported_io_types": { 00:09:07.162 "read": true, 00:09:07.162 "write": true, 00:09:07.162 "unmap": true, 00:09:07.162 "flush": true, 00:09:07.162 "reset": true, 00:09:07.162 "nvme_admin": false, 00:09:07.162 "nvme_io": false, 00:09:07.162 "nvme_io_md": false, 00:09:07.162 "write_zeroes": true, 00:09:07.162 "zcopy": true, 00:09:07.162 "get_zone_info": false, 00:09:07.162 
"zone_management": false, 00:09:07.162 "zone_append": false, 00:09:07.162 "compare": false, 00:09:07.162 "compare_and_write": false, 00:09:07.162 "abort": true, 00:09:07.162 "seek_hole": false, 00:09:07.162 "seek_data": false, 00:09:07.162 "copy": true, 00:09:07.162 "nvme_iov_md": false 00:09:07.162 }, 00:09:07.162 "memory_domains": [ 00:09:07.162 { 00:09:07.162 "dma_device_id": "system", 00:09:07.162 "dma_device_type": 1 00:09:07.162 }, 00:09:07.162 { 00:09:07.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.162 "dma_device_type": 2 00:09:07.162 } 00:09:07.162 ], 00:09:07.162 "driver_specific": {} 00:09:07.162 } 00:09:07.162 ] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.162 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.422 BaseBdev3 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.422 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.423 [ 00:09:07.423 { 00:09:07.423 "name": "BaseBdev3", 00:09:07.423 "aliases": [ 00:09:07.423 "0b984d12-cf7c-4cda-b81f-14ab281b7f2d" 00:09:07.423 ], 00:09:07.423 "product_name": "Malloc disk", 00:09:07.423 "block_size": 512, 00:09:07.423 "num_blocks": 65536, 00:09:07.423 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d", 00:09:07.423 "assigned_rate_limits": { 00:09:07.423 "rw_ios_per_sec": 0, 00:09:07.423 "rw_mbytes_per_sec": 0, 00:09:07.423 "r_mbytes_per_sec": 0, 00:09:07.423 "w_mbytes_per_sec": 0 00:09:07.423 }, 00:09:07.423 "claimed": false, 00:09:07.423 "zoned": false, 00:09:07.423 "supported_io_types": { 00:09:07.423 "read": true, 00:09:07.423 "write": true, 00:09:07.423 "unmap": true, 00:09:07.423 "flush": true, 00:09:07.423 "reset": true, 00:09:07.423 "nvme_admin": false, 00:09:07.423 "nvme_io": false, 00:09:07.423 "nvme_io_md": false, 00:09:07.423 "write_zeroes": true, 00:09:07.423 
"zcopy": true, 00:09:07.423 "get_zone_info": false, 00:09:07.423 "zone_management": false, 00:09:07.423 "zone_append": false, 00:09:07.423 "compare": false, 00:09:07.423 "compare_and_write": false, 00:09:07.423 "abort": true, 00:09:07.423 "seek_hole": false, 00:09:07.423 "seek_data": false, 00:09:07.423 "copy": true, 00:09:07.423 "nvme_iov_md": false 00:09:07.423 }, 00:09:07.423 "memory_domains": [ 00:09:07.423 { 00:09:07.423 "dma_device_id": "system", 00:09:07.423 "dma_device_type": 1 00:09:07.423 }, 00:09:07.423 { 00:09:07.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.423 "dma_device_type": 2 00:09:07.423 } 00:09:07.423 ], 00:09:07.423 "driver_specific": {} 00:09:07.423 } 00:09:07.423 ] 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.423 [2024-10-13 02:23:25.892234] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.423 [2024-10-13 02:23:25.892371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.423 [2024-10-13 02:23:25.892414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.423 [2024-10-13 02:23:25.894309] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.423 02:23:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.423 "name": "Existed_Raid", 00:09:07.423 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02", 00:09:07.423 "strip_size_kb": 64, 00:09:07.423 "state": "configuring", 00:09:07.423 "raid_level": "raid0", 00:09:07.423 "superblock": true, 00:09:07.423 "num_base_bdevs": 3, 00:09:07.423 "num_base_bdevs_discovered": 2, 00:09:07.423 "num_base_bdevs_operational": 3, 00:09:07.423 "base_bdevs_list": [ 00:09:07.423 { 00:09:07.423 "name": "BaseBdev1", 00:09:07.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.423 "is_configured": false, 00:09:07.423 "data_offset": 0, 00:09:07.423 "data_size": 0 00:09:07.423 }, 00:09:07.423 { 00:09:07.423 "name": "BaseBdev2", 00:09:07.423 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9", 00:09:07.423 "is_configured": true, 00:09:07.423 "data_offset": 2048, 00:09:07.423 "data_size": 63488 00:09:07.423 }, 00:09:07.423 { 00:09:07.423 "name": "BaseBdev3", 00:09:07.423 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d", 00:09:07.423 "is_configured": true, 00:09:07.423 "data_offset": 2048, 00:09:07.423 "data_size": 63488 00:09:07.423 } 00:09:07.423 ] 00:09:07.423 }' 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.423 02:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.993 [2024-10-13 02:23:26.379459] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.993 02:23:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.993 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.993 "name": "Existed_Raid", 00:09:07.993 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02", 00:09:07.993 "strip_size_kb": 64, 
00:09:07.993 "state": "configuring", 00:09:07.993 "raid_level": "raid0", 00:09:07.993 "superblock": true, 00:09:07.993 "num_base_bdevs": 3, 00:09:07.993 "num_base_bdevs_discovered": 1, 00:09:07.993 "num_base_bdevs_operational": 3, 00:09:07.993 "base_bdevs_list": [ 00:09:07.993 { 00:09:07.993 "name": "BaseBdev1", 00:09:07.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.993 "is_configured": false, 00:09:07.994 "data_offset": 0, 00:09:07.994 "data_size": 0 00:09:07.994 }, 00:09:07.994 { 00:09:07.994 "name": null, 00:09:07.994 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9", 00:09:07.994 "is_configured": false, 00:09:07.994 "data_offset": 0, 00:09:07.994 "data_size": 63488 00:09:07.994 }, 00:09:07.994 { 00:09:07.994 "name": "BaseBdev3", 00:09:07.994 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d", 00:09:07.994 "is_configured": true, 00:09:07.994 "data_offset": 2048, 00:09:07.994 "data_size": 63488 00:09:07.994 } 00:09:07.994 ] 00:09:07.994 }' 00:09:07.994 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.994 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.254 [2024-10-13 02:23:26.885846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.254 BaseBdev1 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.254 
[ 00:09:08.254 { 00:09:08.254 "name": "BaseBdev1", 00:09:08.254 "aliases": [ 00:09:08.254 "718b474e-9107-478e-b394-c69f931c6a49" 00:09:08.254 ], 00:09:08.254 "product_name": "Malloc disk", 00:09:08.254 "block_size": 512, 00:09:08.254 "num_blocks": 65536, 00:09:08.254 "uuid": "718b474e-9107-478e-b394-c69f931c6a49", 00:09:08.254 "assigned_rate_limits": { 00:09:08.254 "rw_ios_per_sec": 0, 00:09:08.254 "rw_mbytes_per_sec": 0, 00:09:08.254 "r_mbytes_per_sec": 0, 00:09:08.254 "w_mbytes_per_sec": 0 00:09:08.254 }, 00:09:08.254 "claimed": true, 00:09:08.254 "claim_type": "exclusive_write", 00:09:08.254 "zoned": false, 00:09:08.254 "supported_io_types": { 00:09:08.254 "read": true, 00:09:08.254 "write": true, 00:09:08.254 "unmap": true, 00:09:08.254 "flush": true, 00:09:08.254 "reset": true, 00:09:08.254 "nvme_admin": false, 00:09:08.254 "nvme_io": false, 00:09:08.254 "nvme_io_md": false, 00:09:08.254 "write_zeroes": true, 00:09:08.254 "zcopy": true, 00:09:08.254 "get_zone_info": false, 00:09:08.254 "zone_management": false, 00:09:08.254 "zone_append": false, 00:09:08.254 "compare": false, 00:09:08.254 "compare_and_write": false, 00:09:08.254 "abort": true, 00:09:08.254 "seek_hole": false, 00:09:08.254 "seek_data": false, 00:09:08.254 "copy": true, 00:09:08.254 "nvme_iov_md": false 00:09:08.254 }, 00:09:08.254 "memory_domains": [ 00:09:08.254 { 00:09:08.254 "dma_device_id": "system", 00:09:08.254 "dma_device_type": 1 00:09:08.254 }, 00:09:08.254 { 00:09:08.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.254 "dma_device_type": 2 00:09:08.254 } 00:09:08.254 ], 00:09:08.254 "driver_specific": {} 00:09:08.254 } 00:09:08.254 ] 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.254 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.540 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.540 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.540 "name": "Existed_Raid", 00:09:08.540 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02", 00:09:08.540 "strip_size_kb": 64, 00:09:08.540 "state": "configuring", 00:09:08.540 "raid_level": "raid0", 00:09:08.540 "superblock": true, 
00:09:08.540 "num_base_bdevs": 3, 00:09:08.540 "num_base_bdevs_discovered": 2, 00:09:08.540 "num_base_bdevs_operational": 3, 00:09:08.540 "base_bdevs_list": [ 00:09:08.540 { 00:09:08.540 "name": "BaseBdev1", 00:09:08.540 "uuid": "718b474e-9107-478e-b394-c69f931c6a49", 00:09:08.540 "is_configured": true, 00:09:08.540 "data_offset": 2048, 00:09:08.540 "data_size": 63488 00:09:08.540 }, 00:09:08.540 { 00:09:08.540 "name": null, 00:09:08.540 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9", 00:09:08.540 "is_configured": false, 00:09:08.540 "data_offset": 0, 00:09:08.540 "data_size": 63488 00:09:08.540 }, 00:09:08.540 { 00:09:08.540 "name": "BaseBdev3", 00:09:08.540 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d", 00:09:08.540 "is_configured": true, 00:09:08.540 "data_offset": 2048, 00:09:08.540 "data_size": 63488 00:09:08.540 } 00:09:08.540 ] 00:09:08.540 }' 00:09:08.540 02:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.540 02:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.807 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.807 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.807 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.807 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:08.807 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.807 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:08.807 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.808 [2024-10-13 02:23:27.409038] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.808 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.808 "name": "Existed_Raid", 00:09:08.808 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02", 00:09:08.808 "strip_size_kb": 64, 00:09:08.808 "state": "configuring", 00:09:08.808 "raid_level": "raid0", 00:09:08.808 "superblock": true, 00:09:08.808 "num_base_bdevs": 3, 00:09:08.808 "num_base_bdevs_discovered": 1, 00:09:08.808 "num_base_bdevs_operational": 3, 00:09:08.808 "base_bdevs_list": [ 00:09:08.808 { 00:09:08.808 "name": "BaseBdev1", 00:09:08.808 "uuid": "718b474e-9107-478e-b394-c69f931c6a49", 00:09:08.808 "is_configured": true, 00:09:08.808 "data_offset": 2048, 00:09:08.808 "data_size": 63488 00:09:08.808 }, 00:09:08.808 { 00:09:08.809 "name": null, 00:09:08.809 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9", 00:09:08.809 "is_configured": false, 00:09:08.809 "data_offset": 0, 00:09:08.809 "data_size": 63488 00:09:08.809 }, 00:09:08.809 { 00:09:08.809 "name": null, 00:09:08.809 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d", 00:09:08.809 "is_configured": false, 00:09:08.809 "data_offset": 0, 00:09:08.809 "data_size": 63488 00:09:08.809 } 00:09:08.809 ] 00:09:08.809 }' 00:09:08.809 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.809 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.385 [2024-10-13 02:23:27.880200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.385 02:23:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.385 "name": "Existed_Raid", 00:09:09.385 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02", 00:09:09.385 "strip_size_kb": 64, 00:09:09.385 "state": "configuring", 00:09:09.385 "raid_level": "raid0", 00:09:09.385 "superblock": true, 00:09:09.385 "num_base_bdevs": 3, 00:09:09.385 "num_base_bdevs_discovered": 2, 00:09:09.385 "num_base_bdevs_operational": 3, 00:09:09.385 "base_bdevs_list": [ 00:09:09.385 { 00:09:09.385 "name": "BaseBdev1", 00:09:09.385 "uuid": "718b474e-9107-478e-b394-c69f931c6a49", 00:09:09.385 "is_configured": true, 00:09:09.385 "data_offset": 2048, 00:09:09.385 "data_size": 63488 00:09:09.385 }, 00:09:09.385 { 00:09:09.385 "name": null, 00:09:09.385 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9", 00:09:09.385 "is_configured": false, 00:09:09.385 "data_offset": 0, 00:09:09.385 "data_size": 63488 00:09:09.385 }, 00:09:09.385 { 00:09:09.385 "name": "BaseBdev3", 00:09:09.385 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d", 00:09:09.385 "is_configured": true, 00:09:09.385 "data_offset": 2048, 00:09:09.385 "data_size": 63488 00:09:09.385 } 00:09:09.385 ] 00:09:09.385 }' 00:09:09.385 02:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.385 
02:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.644 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.644 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:09.644 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.644 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.904 [2024-10-13 02:23:28.367415] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:09.904 "name": "Existed_Raid",
00:09:09.904 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02",
00:09:09.904 "strip_size_kb": 64,
00:09:09.904 "state": "configuring",
00:09:09.904 "raid_level": "raid0",
00:09:09.904 "superblock": true,
00:09:09.904 "num_base_bdevs": 3,
00:09:09.904 "num_base_bdevs_discovered": 1,
00:09:09.904 "num_base_bdevs_operational": 3,
00:09:09.904 "base_bdevs_list": [
00:09:09.904 {
00:09:09.904 "name": null,
00:09:09.904 "uuid": "718b474e-9107-478e-b394-c69f931c6a49",
00:09:09.904 "is_configured": false,
00:09:09.904 "data_offset": 0,
00:09:09.904 "data_size": 63488
00:09:09.904 },
00:09:09.904 {
00:09:09.904 "name": null,
00:09:09.904 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9",
00:09:09.904 "is_configured": false,
00:09:09.904 "data_offset": 0,
00:09:09.904 "data_size": 63488
00:09:09.904 },
00:09:09.904 {
00:09:09.904 "name": "BaseBdev3",
00:09:09.904 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d",
00:09:09.904 "is_configured": true,
00:09:09.904 "data_offset": 2048,
00:09:09.904 "data_size": 63488
00:09:09.904 }
00:09:09.904 ]
00:09:09.904 }'
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:09.904 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.164 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.164 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.164 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.164 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:10.423 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.423 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:10.423 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:10.423 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.423 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.423 [2024-10-13 02:23:28.893169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:10.424 "name": "Existed_Raid",
00:09:10.424 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02",
00:09:10.424 "strip_size_kb": 64,
00:09:10.424 "state": "configuring",
00:09:10.424 "raid_level": "raid0",
00:09:10.424 "superblock": true,
00:09:10.424 "num_base_bdevs": 3,
00:09:10.424 "num_base_bdevs_discovered": 2,
00:09:10.424 "num_base_bdevs_operational": 3,
00:09:10.424 "base_bdevs_list": [
00:09:10.424 {
00:09:10.424 "name": null,
00:09:10.424 "uuid": "718b474e-9107-478e-b394-c69f931c6a49",
00:09:10.424 "is_configured": false,
00:09:10.424 "data_offset": 0,
00:09:10.424 "data_size": 63488
00:09:10.424 },
00:09:10.424 {
00:09:10.424 "name": "BaseBdev2",
00:09:10.424 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9",
00:09:10.424 "is_configured": true,
00:09:10.424 "data_offset": 2048,
00:09:10.424 "data_size": 63488
00:09:10.424 },
00:09:10.424 {
00:09:10.424 "name": "BaseBdev3",
00:09:10.424 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d",
00:09:10.424 "is_configured": true,
00:09:10.424 "data_offset": 2048,
00:09:10.424 "data_size": 63488
00:09:10.424 }
00:09:10.424 ]
00:09:10.424 }'
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:10.424 02:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd
bdev_raid_get_bdevs all
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.683 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 718b474e-9107-478e-b394-c69f931c6a49
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.944 [2024-10-13 02:23:29.391218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:09:10.944 [2024-10-13 02:23:29.391376] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:09:10.944 [2024-10-13 02:23:29.391391] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:10.944 [2024-10-13 02:23:29.391616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870
00:09:10.944 [2024-10-13 02:23:29.391733] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:09:10.944 [2024-10-13 02:23:29.391743] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80
00:09:10.944 [2024-10-13 02:23:29.391844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:10.944 NewBaseBdev
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.944 [
00:09:10.944 {
00:09:10.944 "name": "NewBaseBdev",
00:09:10.944 "aliases": [
00:09:10.944 "718b474e-9107-478e-b394-c69f931c6a49"
00:09:10.944 ],
00:09:10.944 "product_name": "Malloc disk",
00:09:10.944 "block_size": 512,
00:09:10.944 "num_blocks": 65536,
00:09:10.944 "uuid": "718b474e-9107-478e-b394-c69f931c6a49",
00:09:10.944 "assigned_rate_limits": {
00:09:10.944 "rw_ios_per_sec": 0,
00:09:10.944 "rw_mbytes_per_sec": 0,
00:09:10.944 "r_mbytes_per_sec": 0,
00:09:10.944 "w_mbytes_per_sec": 0
00:09:10.944 },
00:09:10.944 "claimed": true,
00:09:10.944 "claim_type": "exclusive_write",
00:09:10.944 "zoned": false,
00:09:10.944 "supported_io_types": {
00:09:10.944 "read": true,
00:09:10.944 "write": true,
00:09:10.944 "unmap": true,
00:09:10.944 "flush": true,
00:09:10.944 "reset": true,
00:09:10.944 "nvme_admin": false,
00:09:10.944 "nvme_io": false,
00:09:10.944 "nvme_io_md": false,
00:09:10.944 "write_zeroes": true,
00:09:10.944 "zcopy": true,
00:09:10.944 "get_zone_info": false,
00:09:10.944 "zone_management": false,
00:09:10.944 "zone_append": false,
00:09:10.944 "compare": false,
00:09:10.944 "compare_and_write": false,
00:09:10.944 "abort": true,
00:09:10.944 "seek_hole": false,
00:09:10.944 "seek_data": false,
00:09:10.944 "copy": true,
00:09:10.944 "nvme_iov_md": false
00:09:10.944 },
00:09:10.944 "memory_domains": [
00:09:10.944 {
00:09:10.944 "dma_device_id": "system",
00:09:10.944 "dma_device_type": 1
00:09:10.944 },
00:09:10.944 {
00:09:10.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:10.944 "dma_device_type": 2
00:09:10.944 }
00:09:10.944 ],
00:09:10.944 "driver_specific": {}
00:09:10.944 }
00:09:10.944 ]
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:10.944 "name": "Existed_Raid",
00:09:10.944 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02",
00:09:10.944 "strip_size_kb": 64,
00:09:10.944 "state": "online",
00:09:10.944 "raid_level": "raid0",
00:09:10.944 "superblock": true,
00:09:10.944 "num_base_bdevs": 3,
00:09:10.944 "num_base_bdevs_discovered": 3,
00:09:10.944 "num_base_bdevs_operational": 3,
00:09:10.944 "base_bdevs_list": [
00:09:10.944 {
00:09:10.944 "name": "NewBaseBdev",
00:09:10.944 "uuid": "718b474e-9107-478e-b394-c69f931c6a49",
00:09:10.944 "is_configured": true,
00:09:10.944 "data_offset": 2048,
00:09:10.944 "data_size": 63488
00:09:10.944 },
00:09:10.944 {
00:09:10.944 "name": "BaseBdev2",
00:09:10.944 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9",
00:09:10.944 "is_configured": true,
00:09:10.944 "data_offset": 2048,
00:09:10.944 "data_size": 63488
00:09:10.944 },
00:09:10.944 {
00:09:10.944 "name": "BaseBdev3",
00:09:10.944 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d",
00:09:10.944 "is_configured": true,
00:09:10.944 "data_offset": 2048,
00:09:10.944 "data_size": 63488
00:09:10.944 }
00:09:10.944 ]
00:09:10.944 }'
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:10.944 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.204 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.464 [2024-10-13 02:23:29.886807] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:11.464 02:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.464 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:11.464 "name": "Existed_Raid",
00:09:11.464 "aliases": [
00:09:11.464 "4251fa36-e1aa-41cc-8a34-1635a13afb02"
00:09:11.464 ],
00:09:11.464 "product_name": "Raid Volume",
00:09:11.464 "block_size": 512,
00:09:11.464 "num_blocks": 190464,
00:09:11.464 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02",
00:09:11.464 "assigned_rate_limits": {
00:09:11.464 "rw_ios_per_sec": 0,
00:09:11.464 "rw_mbytes_per_sec": 0,
00:09:11.464 "r_mbytes_per_sec": 0,
00:09:11.464 "w_mbytes_per_sec": 0
00:09:11.464 },
00:09:11.464 "claimed": false,
00:09:11.464 "zoned": false,
00:09:11.464 "supported_io_types": {
00:09:11.464 "read": true,
00:09:11.464 "write": true,
00:09:11.464 "unmap": true,
00:09:11.464 "flush": true,
00:09:11.464 "reset": true,
00:09:11.464 "nvme_admin": false,
00:09:11.464 "nvme_io": false,
00:09:11.464 "nvme_io_md": false,
00:09:11.464 "write_zeroes": true,
00:09:11.464 "zcopy": false,
00:09:11.464 "get_zone_info": false,
00:09:11.464 "zone_management": false,
00:09:11.464 "zone_append": false,
00:09:11.464 "compare": false,
00:09:11.464 "compare_and_write": false,
00:09:11.464 "abort": false,
00:09:11.464 "seek_hole": false,
00:09:11.464 "seek_data": false,
00:09:11.464 "copy": false,
00:09:11.464 "nvme_iov_md": false
00:09:11.464 },
00:09:11.464 "memory_domains": [
00:09:11.464 {
00:09:11.464 "dma_device_id": "system",
00:09:11.464 "dma_device_type": 1
00:09:11.464 },
00:09:11.464 {
00:09:11.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:11.464 "dma_device_type": 2
00:09:11.464 },
00:09:11.464 {
00:09:11.464 "dma_device_id": "system",
00:09:11.464 "dma_device_type": 1
00:09:11.464 },
00:09:11.464 {
00:09:11.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:11.464 "dma_device_type": 2
00:09:11.465 },
00:09:11.465 {
00:09:11.465 "dma_device_id": "system",
00:09:11.465 "dma_device_type": 1
00:09:11.465 },
00:09:11.465 {
00:09:11.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:11.465 "dma_device_type": 2
00:09:11.465 }
00:09:11.465 ],
00:09:11.465 "driver_specific": {
00:09:11.465 "raid": {
00:09:11.465 "uuid": "4251fa36-e1aa-41cc-8a34-1635a13afb02",
00:09:11.465 "strip_size_kb": 64,
00:09:11.465 "state": "online",
00:09:11.465 "raid_level": "raid0",
00:09:11.465 "superblock": true,
00:09:11.465 "num_base_bdevs": 3,
00:09:11.465 "num_base_bdevs_discovered": 3,
00:09:11.465 "num_base_bdevs_operational": 3,
00:09:11.465 "base_bdevs_list": [
00:09:11.465 {
00:09:11.465 "name": "NewBaseBdev",
00:09:11.465 "uuid": "718b474e-9107-478e-b394-c69f931c6a49",
00:09:11.465 "is_configured": true,
00:09:11.465 "data_offset": 2048,
00:09:11.465 "data_size": 63488
00:09:11.465 },
00:09:11.465 {
00:09:11.465 "name": "BaseBdev2",
00:09:11.465 "uuid": "e3b36c71-1e52-449d-9dea-9291fe8b31a9",
00:09:11.465 "is_configured": true,
00:09:11.465 "data_offset": 2048,
00:09:11.465 "data_size": 63488
00:09:11.465 },
00:09:11.465 {
00:09:11.465 "name": "BaseBdev3",
00:09:11.465 "uuid": "0b984d12-cf7c-4cda-b81f-14ab281b7f2d",
00:09:11.465 "is_configured": true,
00:09:11.465 "data_offset": 2048,
00:09:11.465 "data_size": 63488
00:09:11.465 }
00:09:11.465 ]
00:09:11.465 }
00:09:11.465 }
00:09:11.465 }'
00:09:11.465 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:11.465 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:09:11.465 BaseBdev2
00:09:11.465 BaseBdev3'
00:09:11.465 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:11.465 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:11.465 02:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.465 [2024-10-13 02:23:30.134056] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:11.465 [2024-10-13 02:23:30.134085] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:11.465 [2024-10-13 02:23:30.134150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:11.465 [2024-10-13 02:23:30.134200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:11.465 [2024-10-13 02:23:30.134212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75520
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75520 ']'
00:09:11.465 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75520
00:09:11.730 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:09:11.730 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:11.730 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75520
killing process with pid 75520
02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:11.730 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:11.730 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75520'
00:09:11.730 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75520
00:09:11.730 [2024-10-13 02:23:30.170285] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:11.730 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75520
00:09:11.730 [2024-10-13 02:23:30.201182] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:11.992 02:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:09:11.992
00:09:11.992 real	0m8.928s
00:09:11.992 user	0m15.151s
00:09:11.992 sys	0m1.920s
00:09:11.992 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:11.992 02:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.992 ************************************
00:09:11.992 END TEST raid_state_function_test_sb
00:09:11.992 ************************************
00:09:11.992 02:23:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:09:11.992 02:23:30 bdev_raid --
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:11.992 02:23:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:11.992 02:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:11.992 ************************************
00:09:11.992 START TEST raid_superblock_test
00:09:11.992 ************************************
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']'
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76129
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76129
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76129 ']'
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
02:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:11.992 02:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:11.992 [2024-10-13 02:23:30.614035] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:09:11.992 [2024-10-13 02:23:30.614192] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76129 ]
00:09:12.252 [2024-10-13 02:23:30.759990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:12.252 [2024-10-13 02:23:30.804711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:12.252 [2024-10-13 02:23:30.846504] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:12.252 [2024-10-13 02:23:30.846551] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:12.822 malloc1
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:12.822 [2024-10-13 02:23:31.476500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:12.822 [2024-10-13 02:23:31.476573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:12.822 [2024-10-13 02:23:31.476591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:09:12.822 [2024-10-13 02:23:31.476605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:12.822 [2024-10-13 02:23:31.478775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:12.822 [2024-10-13 02:23:31.478816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:12.822 pt1
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.822 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:13.082 malloc2
00:09:13.082 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:13.082 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:13.083 [2024-10-13 02:23:31.519157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:13.083 [2024-10-13 02:23:31.519230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:13.083 [2024-10-13 02:23:31.519251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:13.083 [2024-10-13 02:23:31.519268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:13.083 [2024-10-13 02:23:31.522238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:13.083 [2024-10-13 02:23:31.522286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:13.083 malloc3
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:13.083 [2024-10-13 02:23:31.547737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
[2024-10-13 02:23:31.547798]
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.083 [2024-10-13 02:23:31.547816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:13.083 [2024-10-13 02:23:31.547827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.083 [2024-10-13 02:23:31.549922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.083 [2024-10-13 02:23:31.549961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:13.083 pt3 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.083 [2024-10-13 02:23:31.559804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:13.083 [2024-10-13 02:23:31.561638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:13.083 [2024-10-13 02:23:31.561696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:13.083 [2024-10-13 02:23:31.561842] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:13.083 [2024-10-13 02:23:31.561860] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:13.083 [2024-10-13 02:23:31.562108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:09:13.083 [2024-10-13 02:23:31.562240] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:13.083 [2024-10-13 02:23:31.562267] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:13.083 [2024-10-13 02:23:31.562385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.083 02:23:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.083 "name": "raid_bdev1", 00:09:13.083 "uuid": "6570468b-842f-4114-9fba-4ad713fca182", 00:09:13.083 "strip_size_kb": 64, 00:09:13.083 "state": "online", 00:09:13.083 "raid_level": "raid0", 00:09:13.083 "superblock": true, 00:09:13.083 "num_base_bdevs": 3, 00:09:13.083 "num_base_bdevs_discovered": 3, 00:09:13.083 "num_base_bdevs_operational": 3, 00:09:13.083 "base_bdevs_list": [ 00:09:13.083 { 00:09:13.083 "name": "pt1", 00:09:13.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.083 "is_configured": true, 00:09:13.083 "data_offset": 2048, 00:09:13.083 "data_size": 63488 00:09:13.083 }, 00:09:13.083 { 00:09:13.083 "name": "pt2", 00:09:13.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.083 "is_configured": true, 00:09:13.083 "data_offset": 2048, 00:09:13.083 "data_size": 63488 00:09:13.083 }, 00:09:13.083 { 00:09:13.083 "name": "pt3", 00:09:13.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.083 "is_configured": true, 00:09:13.083 "data_offset": 2048, 00:09:13.083 "data_size": 63488 00:09:13.083 } 00:09:13.083 ] 00:09:13.083 }' 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.083 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.343 02:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.343 [2024-10-13 02:23:31.999313] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.343 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.603 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.603 "name": "raid_bdev1", 00:09:13.603 "aliases": [ 00:09:13.603 "6570468b-842f-4114-9fba-4ad713fca182" 00:09:13.603 ], 00:09:13.603 "product_name": "Raid Volume", 00:09:13.603 "block_size": 512, 00:09:13.603 "num_blocks": 190464, 00:09:13.603 "uuid": "6570468b-842f-4114-9fba-4ad713fca182", 00:09:13.603 "assigned_rate_limits": { 00:09:13.603 "rw_ios_per_sec": 0, 00:09:13.603 "rw_mbytes_per_sec": 0, 00:09:13.603 "r_mbytes_per_sec": 0, 00:09:13.603 "w_mbytes_per_sec": 0 00:09:13.603 }, 00:09:13.603 "claimed": false, 00:09:13.603 "zoned": false, 00:09:13.603 "supported_io_types": { 00:09:13.603 "read": true, 00:09:13.603 "write": true, 00:09:13.603 "unmap": true, 00:09:13.603 "flush": true, 00:09:13.603 "reset": true, 00:09:13.603 "nvme_admin": false, 00:09:13.603 "nvme_io": false, 00:09:13.603 "nvme_io_md": false, 00:09:13.603 "write_zeroes": true, 00:09:13.603 "zcopy": false, 00:09:13.603 "get_zone_info": false, 00:09:13.603 "zone_management": false, 00:09:13.603 "zone_append": false, 00:09:13.603 "compare": 
false, 00:09:13.603 "compare_and_write": false, 00:09:13.603 "abort": false, 00:09:13.603 "seek_hole": false, 00:09:13.603 "seek_data": false, 00:09:13.603 "copy": false, 00:09:13.603 "nvme_iov_md": false 00:09:13.603 }, 00:09:13.603 "memory_domains": [ 00:09:13.603 { 00:09:13.603 "dma_device_id": "system", 00:09:13.603 "dma_device_type": 1 00:09:13.603 }, 00:09:13.603 { 00:09:13.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.603 "dma_device_type": 2 00:09:13.603 }, 00:09:13.604 { 00:09:13.604 "dma_device_id": "system", 00:09:13.604 "dma_device_type": 1 00:09:13.604 }, 00:09:13.604 { 00:09:13.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.604 "dma_device_type": 2 00:09:13.604 }, 00:09:13.604 { 00:09:13.604 "dma_device_id": "system", 00:09:13.604 "dma_device_type": 1 00:09:13.604 }, 00:09:13.604 { 00:09:13.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.604 "dma_device_type": 2 00:09:13.604 } 00:09:13.604 ], 00:09:13.604 "driver_specific": { 00:09:13.604 "raid": { 00:09:13.604 "uuid": "6570468b-842f-4114-9fba-4ad713fca182", 00:09:13.604 "strip_size_kb": 64, 00:09:13.604 "state": "online", 00:09:13.604 "raid_level": "raid0", 00:09:13.604 "superblock": true, 00:09:13.604 "num_base_bdevs": 3, 00:09:13.604 "num_base_bdevs_discovered": 3, 00:09:13.604 "num_base_bdevs_operational": 3, 00:09:13.604 "base_bdevs_list": [ 00:09:13.604 { 00:09:13.604 "name": "pt1", 00:09:13.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.604 "is_configured": true, 00:09:13.604 "data_offset": 2048, 00:09:13.604 "data_size": 63488 00:09:13.604 }, 00:09:13.604 { 00:09:13.604 "name": "pt2", 00:09:13.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.604 "is_configured": true, 00:09:13.604 "data_offset": 2048, 00:09:13.604 "data_size": 63488 00:09:13.604 }, 00:09:13.604 { 00:09:13.604 "name": "pt3", 00:09:13.604 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.604 "is_configured": true, 00:09:13.604 "data_offset": 2048, 00:09:13.604 "data_size": 
63488 00:09:13.604 } 00:09:13.604 ] 00:09:13.604 } 00:09:13.604 } 00:09:13.604 }' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:13.604 pt2 00:09:13.604 pt3' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.604 
02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.604 [2024-10-13 02:23:32.238782] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6570468b-842f-4114-9fba-4ad713fca182 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6570468b-842f-4114-9fba-4ad713fca182 ']' 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.604 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.864 [2024-10-13 02:23:32.286472] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.864 [2024-10-13 02:23:32.286549] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.864 [2024-10-13 02:23:32.286645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.864 [2024-10-13 02:23:32.286726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.864 [2024-10-13 02:23:32.286774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:13.864 02:23:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.864 [2024-10-13 02:23:32.442224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:13.864 [2024-10-13 02:23:32.444103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:13.864 [2024-10-13 02:23:32.444187] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:13.864 [2024-10-13 02:23:32.444268] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:13.864 [2024-10-13 02:23:32.444344] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:13.864 [2024-10-13 02:23:32.444401] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:13.864 [2024-10-13 02:23:32.444460] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.864 [2024-10-13 02:23:32.444508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:09:13.864 request: 00:09:13.864 { 00:09:13.864 "name": "raid_bdev1", 00:09:13.864 "raid_level": "raid0", 00:09:13.864 "base_bdevs": [ 00:09:13.864 "malloc1", 00:09:13.864 "malloc2", 00:09:13.864 "malloc3" 00:09:13.864 ], 00:09:13.864 "strip_size_kb": 64, 00:09:13.864 "superblock": false, 00:09:13.864 "method": "bdev_raid_create", 00:09:13.864 "req_id": 1 00:09:13.864 } 00:09:13.864 Got JSON-RPC error response 00:09:13.864 response: 00:09:13.864 { 00:09:13.864 "code": -17, 00:09:13.864 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:13.864 } 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:13.864 02:23:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.865 [2024-10-13 02:23:32.510078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:13.865 [2024-10-13 02:23:32.510176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.865 [2024-10-13 02:23:32.510207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:13.865 [2024-10-13 02:23:32.510236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.865 [2024-10-13 02:23:32.512379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.865 [2024-10-13 02:23:32.512456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:13.865 [2024-10-13 02:23:32.512540] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:13.865 [2024-10-13 02:23:32.512593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:13.865 pt1 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.865 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.124 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.124 "name": "raid_bdev1", 00:09:14.124 "uuid": "6570468b-842f-4114-9fba-4ad713fca182", 00:09:14.125 
"strip_size_kb": 64, 00:09:14.125 "state": "configuring", 00:09:14.125 "raid_level": "raid0", 00:09:14.125 "superblock": true, 00:09:14.125 "num_base_bdevs": 3, 00:09:14.125 "num_base_bdevs_discovered": 1, 00:09:14.125 "num_base_bdevs_operational": 3, 00:09:14.125 "base_bdevs_list": [ 00:09:14.125 { 00:09:14.125 "name": "pt1", 00:09:14.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.125 "is_configured": true, 00:09:14.125 "data_offset": 2048, 00:09:14.125 "data_size": 63488 00:09:14.125 }, 00:09:14.125 { 00:09:14.125 "name": null, 00:09:14.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.125 "is_configured": false, 00:09:14.125 "data_offset": 2048, 00:09:14.125 "data_size": 63488 00:09:14.125 }, 00:09:14.125 { 00:09:14.125 "name": null, 00:09:14.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.125 "is_configured": false, 00:09:14.125 "data_offset": 2048, 00:09:14.125 "data_size": 63488 00:09:14.125 } 00:09:14.125 ] 00:09:14.125 }' 00:09:14.125 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.125 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 [2024-10-13 02:23:32.933385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.384 [2024-10-13 02:23:32.933521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.384 [2024-10-13 02:23:32.933561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:09:14.384 [2024-10-13 02:23:32.933594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.384 [2024-10-13 02:23:32.933998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.384 [2024-10-13 02:23:32.934057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.384 [2024-10-13 02:23:32.934152] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:14.384 [2024-10-13 02:23:32.934204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.384 pt2 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 [2024-10-13 02:23:32.945354] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.384 02:23:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.384 02:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.384 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.384 "name": "raid_bdev1", 00:09:14.384 "uuid": "6570468b-842f-4114-9fba-4ad713fca182", 00:09:14.384 "strip_size_kb": 64, 00:09:14.384 "state": "configuring", 00:09:14.384 "raid_level": "raid0", 00:09:14.384 "superblock": true, 00:09:14.384 "num_base_bdevs": 3, 00:09:14.384 "num_base_bdevs_discovered": 1, 00:09:14.384 "num_base_bdevs_operational": 3, 00:09:14.384 "base_bdevs_list": [ 00:09:14.384 { 00:09:14.384 "name": "pt1", 00:09:14.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.384 "is_configured": true, 00:09:14.384 "data_offset": 2048, 00:09:14.384 "data_size": 63488 00:09:14.384 }, 00:09:14.384 { 00:09:14.384 "name": null, 00:09:14.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.384 "is_configured": false, 00:09:14.384 "data_offset": 0, 00:09:14.385 "data_size": 63488 00:09:14.385 }, 00:09:14.385 { 00:09:14.385 "name": null, 00:09:14.385 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.385 
"is_configured": false, 00:09:14.385 "data_offset": 2048, 00:09:14.385 "data_size": 63488 00:09:14.385 } 00:09:14.385 ] 00:09:14.385 }' 00:09:14.385 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.385 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.954 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:14.954 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.954 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.954 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.954 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.954 [2024-10-13 02:23:33.460574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.954 [2024-10-13 02:23:33.460711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.954 [2024-10-13 02:23:33.460750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:14.954 [2024-10-13 02:23:33.460778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.954 [2024-10-13 02:23:33.461229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.954 [2024-10-13 02:23:33.461303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.954 [2024-10-13 02:23:33.461431] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:14.954 [2024-10-13 02:23:33.461487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.954 pt2 00:09:14.954 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:14.954 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:14.954 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.955 [2024-10-13 02:23:33.472513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:14.955 [2024-10-13 02:23:33.472600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.955 [2024-10-13 02:23:33.472634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:14.955 [2024-10-13 02:23:33.472662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.955 [2024-10-13 02:23:33.472992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.955 [2024-10-13 02:23:33.473051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:14.955 [2024-10-13 02:23:33.473129] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:14.955 [2024-10-13 02:23:33.473180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:14.955 [2024-10-13 02:23:33.473291] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:14.955 [2024-10-13 02:23:33.473327] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:14.955 [2024-10-13 02:23:33.473564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:14.955 [2024-10-13 02:23:33.473698] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:14.955 [2024-10-13 02:23:33.473737] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:14.955 [2024-10-13 02:23:33.473863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.955 pt3 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.955 "name": "raid_bdev1", 00:09:14.955 "uuid": "6570468b-842f-4114-9fba-4ad713fca182", 00:09:14.955 "strip_size_kb": 64, 00:09:14.955 "state": "online", 00:09:14.955 "raid_level": "raid0", 00:09:14.955 "superblock": true, 00:09:14.955 "num_base_bdevs": 3, 00:09:14.955 "num_base_bdevs_discovered": 3, 00:09:14.955 "num_base_bdevs_operational": 3, 00:09:14.955 "base_bdevs_list": [ 00:09:14.955 { 00:09:14.955 "name": "pt1", 00:09:14.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.955 "is_configured": true, 00:09:14.955 "data_offset": 2048, 00:09:14.955 "data_size": 63488 00:09:14.955 }, 00:09:14.955 { 00:09:14.955 "name": "pt2", 00:09:14.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.955 "is_configured": true, 00:09:14.955 "data_offset": 2048, 00:09:14.955 "data_size": 63488 00:09:14.955 }, 00:09:14.955 { 00:09:14.955 "name": "pt3", 00:09:14.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.955 "is_configured": true, 00:09:14.955 "data_offset": 2048, 00:09:14.955 "data_size": 63488 00:09:14.955 } 00:09:14.955 ] 00:09:14.955 }' 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.955 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:15.524 02:23:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.524 [2024-10-13 02:23:33.932050] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.524 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.524 "name": "raid_bdev1", 00:09:15.524 "aliases": [ 00:09:15.524 "6570468b-842f-4114-9fba-4ad713fca182" 00:09:15.524 ], 00:09:15.524 "product_name": "Raid Volume", 00:09:15.524 "block_size": 512, 00:09:15.524 "num_blocks": 190464, 00:09:15.524 "uuid": "6570468b-842f-4114-9fba-4ad713fca182", 00:09:15.524 "assigned_rate_limits": { 00:09:15.524 "rw_ios_per_sec": 0, 00:09:15.524 "rw_mbytes_per_sec": 0, 00:09:15.524 "r_mbytes_per_sec": 0, 00:09:15.524 "w_mbytes_per_sec": 0 00:09:15.524 }, 00:09:15.524 "claimed": false, 00:09:15.524 "zoned": false, 00:09:15.524 "supported_io_types": { 00:09:15.524 "read": true, 00:09:15.524 "write": true, 00:09:15.524 "unmap": true, 00:09:15.524 "flush": true, 00:09:15.524 "reset": true, 00:09:15.524 "nvme_admin": false, 00:09:15.524 "nvme_io": false, 00:09:15.524 "nvme_io_md": false, 00:09:15.524 
"write_zeroes": true, 00:09:15.524 "zcopy": false, 00:09:15.524 "get_zone_info": false, 00:09:15.524 "zone_management": false, 00:09:15.524 "zone_append": false, 00:09:15.524 "compare": false, 00:09:15.524 "compare_and_write": false, 00:09:15.524 "abort": false, 00:09:15.524 "seek_hole": false, 00:09:15.524 "seek_data": false, 00:09:15.524 "copy": false, 00:09:15.524 "nvme_iov_md": false 00:09:15.524 }, 00:09:15.524 "memory_domains": [ 00:09:15.524 { 00:09:15.524 "dma_device_id": "system", 00:09:15.524 "dma_device_type": 1 00:09:15.524 }, 00:09:15.524 { 00:09:15.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.524 "dma_device_type": 2 00:09:15.524 }, 00:09:15.524 { 00:09:15.524 "dma_device_id": "system", 00:09:15.524 "dma_device_type": 1 00:09:15.524 }, 00:09:15.524 { 00:09:15.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.524 "dma_device_type": 2 00:09:15.525 }, 00:09:15.525 { 00:09:15.525 "dma_device_id": "system", 00:09:15.525 "dma_device_type": 1 00:09:15.525 }, 00:09:15.525 { 00:09:15.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.525 "dma_device_type": 2 00:09:15.525 } 00:09:15.525 ], 00:09:15.525 "driver_specific": { 00:09:15.525 "raid": { 00:09:15.525 "uuid": "6570468b-842f-4114-9fba-4ad713fca182", 00:09:15.525 "strip_size_kb": 64, 00:09:15.525 "state": "online", 00:09:15.525 "raid_level": "raid0", 00:09:15.525 "superblock": true, 00:09:15.525 "num_base_bdevs": 3, 00:09:15.525 "num_base_bdevs_discovered": 3, 00:09:15.525 "num_base_bdevs_operational": 3, 00:09:15.525 "base_bdevs_list": [ 00:09:15.525 { 00:09:15.525 "name": "pt1", 00:09:15.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.525 "is_configured": true, 00:09:15.525 "data_offset": 2048, 00:09:15.525 "data_size": 63488 00:09:15.525 }, 00:09:15.525 { 00:09:15.525 "name": "pt2", 00:09:15.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.525 "is_configured": true, 00:09:15.525 "data_offset": 2048, 00:09:15.525 "data_size": 63488 00:09:15.525 }, 00:09:15.525 
{ 00:09:15.525 "name": "pt3", 00:09:15.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.525 "is_configured": true, 00:09:15.525 "data_offset": 2048, 00:09:15.525 "data_size": 63488 00:09:15.525 } 00:09:15.525 ] 00:09:15.525 } 00:09:15.525 } 00:09:15.525 }' 00:09:15.525 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.525 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:15.525 pt2 00:09:15.525 pt3' 00:09:15.525 02:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:15.525 02:23:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:15.525 
[2024-10-13 02:23:34.175580] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.525 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6570468b-842f-4114-9fba-4ad713fca182 '!=' 6570468b-842f-4114-9fba-4ad713fca182 ']' 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76129 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76129 ']' 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76129 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76129 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76129' 00:09:15.785 killing process with pid 76129 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76129 00:09:15.785 [2024-10-13 02:23:34.263848] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.785 [2024-10-13 02:23:34.264018] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.785 [2024-10-13 02:23:34.264118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.785 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76129 00:09:15.785 [2024-10-13 02:23:34.264187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:15.785 [2024-10-13 02:23:34.297514] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.045 02:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:16.045 00:09:16.045 real 0m4.018s 00:09:16.045 user 0m6.307s 00:09:16.045 sys 0m0.884s 00:09:16.045 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.045 02:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.045 ************************************ 00:09:16.045 END TEST raid_superblock_test 00:09:16.045 ************************************ 00:09:16.045 02:23:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:16.045 02:23:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:16.045 02:23:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.045 02:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.045 ************************************ 00:09:16.045 START TEST raid_read_error_test 00:09:16.045 ************************************ 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:16.045 02:23:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wqx5GqOp3g 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76371 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76371 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76371 ']' 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.045 02:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.305 [2024-10-13 02:23:34.735286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:16.305 [2024-10-13 02:23:34.735517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76371 ] 00:09:16.305 [2024-10-13 02:23:34.883411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.305 [2024-10-13 02:23:34.928223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.305 [2024-10-13 02:23:34.969930] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.305 [2024-10-13 02:23:34.970052] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.874 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.874 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:16.874 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:16.874 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:16.874 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.874 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.140 BaseBdev1_malloc 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.140 true 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.140 [2024-10-13 02:23:35.587916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:17.140 [2024-10-13 02:23:35.587979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.140 [2024-10-13 02:23:35.588007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:17.140 [2024-10-13 02:23:35.588023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.140 [2024-10-13 02:23:35.590036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.140 [2024-10-13 02:23:35.590072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.140 BaseBdev1 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.140 BaseBdev2_malloc 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.140 02:23:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.141 true 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.141 [2024-10-13 02:23:35.638131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.141 [2024-10-13 02:23:35.638270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.141 [2024-10-13 02:23:35.638295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:17.141 [2024-10-13 02:23:35.638304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.141 [2024-10-13 02:23:35.640365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.141 [2024-10-13 02:23:35.640400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.141 BaseBdev2 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.141 BaseBdev3_malloc 00:09:17.141 02:23:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.141 true 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.141 [2024-10-13 02:23:35.678532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:17.141 [2024-10-13 02:23:35.678576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.141 [2024-10-13 02:23:35.678594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:17.141 [2024-10-13 02:23:35.678603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.141 [2024-10-13 02:23:35.680546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.141 [2024-10-13 02:23:35.680579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:17.141 BaseBdev3 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.141 [2024-10-13 02:23:35.690585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.141 [2024-10-13 02:23:35.692266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.141 [2024-10-13 02:23:35.692339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.141 [2024-10-13 02:23:35.692502] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:17.141 [2024-10-13 02:23:35.692522] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:17.141 [2024-10-13 02:23:35.692735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:17.141 [2024-10-13 02:23:35.692862] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:17.141 [2024-10-13 02:23:35.692894] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:17.141 [2024-10-13 02:23:35.693002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.141 02:23:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.141 "name": "raid_bdev1", 00:09:17.141 "uuid": "a0261c8d-abc4-4470-a928-065d733fdbdd", 00:09:17.141 "strip_size_kb": 64, 00:09:17.141 "state": "online", 00:09:17.141 "raid_level": "raid0", 00:09:17.141 "superblock": true, 00:09:17.141 "num_base_bdevs": 3, 00:09:17.141 "num_base_bdevs_discovered": 3, 00:09:17.141 "num_base_bdevs_operational": 3, 00:09:17.141 "base_bdevs_list": [ 00:09:17.141 { 00:09:17.141 "name": "BaseBdev1", 00:09:17.141 "uuid": "83682e37-0609-5a68-9217-76e995c327a5", 00:09:17.141 "is_configured": true, 00:09:17.141 "data_offset": 2048, 00:09:17.141 "data_size": 63488 00:09:17.141 }, 00:09:17.141 { 00:09:17.141 "name": "BaseBdev2", 00:09:17.141 "uuid": "1779d7fb-676d-5d97-8e9f-10ceec55fa8e", 00:09:17.141 "is_configured": true, 00:09:17.141 "data_offset": 2048, 00:09:17.141 "data_size": 63488 
00:09:17.141 }, 00:09:17.141 { 00:09:17.141 "name": "BaseBdev3", 00:09:17.141 "uuid": "aeb62165-58ad-551e-aced-de825629279e", 00:09:17.141 "is_configured": true, 00:09:17.141 "data_offset": 2048, 00:09:17.141 "data_size": 63488 00:09:17.141 } 00:09:17.141 ] 00:09:17.141 }' 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.141 02:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.723 02:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:17.723 02:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:17.723 [2024-10-13 02:23:36.238019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.662 "name": "raid_bdev1", 00:09:18.662 "uuid": "a0261c8d-abc4-4470-a928-065d733fdbdd", 00:09:18.662 "strip_size_kb": 64, 00:09:18.662 "state": "online", 00:09:18.662 "raid_level": "raid0", 00:09:18.662 "superblock": true, 00:09:18.662 "num_base_bdevs": 3, 00:09:18.662 "num_base_bdevs_discovered": 3, 00:09:18.662 "num_base_bdevs_operational": 3, 00:09:18.662 "base_bdevs_list": [ 00:09:18.662 { 00:09:18.662 "name": "BaseBdev1", 00:09:18.662 "uuid": "83682e37-0609-5a68-9217-76e995c327a5", 00:09:18.662 "is_configured": true, 00:09:18.662 "data_offset": 2048, 00:09:18.662 "data_size": 63488 
00:09:18.662 }, 00:09:18.662 { 00:09:18.662 "name": "BaseBdev2", 00:09:18.662 "uuid": "1779d7fb-676d-5d97-8e9f-10ceec55fa8e", 00:09:18.662 "is_configured": true, 00:09:18.662 "data_offset": 2048, 00:09:18.662 "data_size": 63488 00:09:18.662 }, 00:09:18.662 { 00:09:18.662 "name": "BaseBdev3", 00:09:18.662 "uuid": "aeb62165-58ad-551e-aced-de825629279e", 00:09:18.662 "is_configured": true, 00:09:18.662 "data_offset": 2048, 00:09:18.662 "data_size": 63488 00:09:18.662 } 00:09:18.662 ] 00:09:18.662 }' 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.662 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.921 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.921 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.921 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.921 [2024-10-13 02:23:37.601587] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.921 [2024-10-13 02:23:37.601637] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.181 [2024-10-13 02:23:37.604095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.181 [2024-10-13 02:23:37.604151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.181 [2024-10-13 02:23:37.604185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.181 [2024-10-13 02:23:37.604196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:19.181 { 00:09:19.181 "results": [ 00:09:19.181 { 00:09:19.181 "job": "raid_bdev1", 00:09:19.181 "core_mask": "0x1", 00:09:19.181 "workload": "randrw", 00:09:19.181 "percentage": 50, 
00:09:19.181 "status": "finished", 00:09:19.181 "queue_depth": 1, 00:09:19.181 "io_size": 131072, 00:09:19.181 "runtime": 1.364491, 00:09:19.181 "iops": 17116.27266138069, 00:09:19.181 "mibps": 2139.5340826725865, 00:09:19.181 "io_failed": 1, 00:09:19.181 "io_timeout": 0, 00:09:19.181 "avg_latency_us": 81.06886041831353, 00:09:19.181 "min_latency_us": 19.004366812227076, 00:09:19.181 "max_latency_us": 1366.5257641921398 00:09:19.181 } 00:09:19.181 ], 00:09:19.181 "core_count": 1 00:09:19.181 } 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76371 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76371 ']' 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76371 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76371 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.181 killing process with pid 76371 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76371' 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76371 00:09:19.181 [2024-10-13 02:23:37.648525] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.181 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76371 00:09:19.181 [2024-10-13 
02:23:37.673771] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wqx5GqOp3g 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:19.449 00:09:19.449 real 0m3.308s 00:09:19.449 user 0m4.153s 00:09:19.449 sys 0m0.568s 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.449 02:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.450 ************************************ 00:09:19.450 END TEST raid_read_error_test 00:09:19.450 ************************************ 00:09:19.450 02:23:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:19.450 02:23:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:19.450 02:23:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.450 02:23:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.450 ************************************ 00:09:19.450 START TEST raid_write_error_test 00:09:19.450 ************************************ 00:09:19.450 02:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:09:19.450 02:23:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:19.450 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:19.450 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:19.450 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:19.450 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.450 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:19.451 02:23:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:19.451 02:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:19.451 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CgMUspKrm9 00:09:19.451 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:19.451 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76500 00:09:19.451 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76500 00:09:19.451 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76500 ']' 00:09:19.451 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.451 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.452 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:19.452 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.452 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.452 [2024-10-13 02:23:38.090151] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:19.452 [2024-10-13 02:23:38.090321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76500 ] 00:09:19.714 [2024-10-13 02:23:38.243657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.714 [2024-10-13 02:23:38.288497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.714 [2024-10-13 02:23:38.330181] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.714 [2024-10-13 02:23:38.330224] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.283 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.283 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:20.283 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.283 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:20.283 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.283 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.283 BaseBdev1_malloc 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 true 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 [2024-10-13 02:23:38.984160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:20.542 [2024-10-13 02:23:38.984226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.542 [2024-10-13 02:23:38.984252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:20.542 [2024-10-13 02:23:38.984268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.542 [2024-10-13 02:23:38.986387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.542 [2024-10-13 02:23:38.986436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:20.542 BaseBdev1 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 02:23:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.542 BaseBdev2_malloc 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 true 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 [2024-10-13 02:23:39.035274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:20.542 [2024-10-13 02:23:39.035331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.542 [2024-10-13 02:23:39.035349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:20.542 [2024-10-13 02:23:39.035358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.542 [2024-10-13 02:23:39.037326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.542 [2024-10-13 02:23:39.037360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:20.542 BaseBdev2 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.542 02:23:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 BaseBdev3_malloc 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 true 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 [2024-10-13 02:23:39.075692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:20.542 [2024-10-13 02:23:39.075734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.542 [2024-10-13 02:23:39.075753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:20.543 [2024-10-13 02:23:39.075761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.543 [2024-10-13 02:23:39.077755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.543 [2024-10-13 02:23:39.077789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:20.543 BaseBdev3 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.543 [2024-10-13 02:23:39.087744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.543 [2024-10-13 02:23:39.089525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.543 [2024-10-13 02:23:39.089601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.543 [2024-10-13 02:23:39.089768] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:20.543 [2024-10-13 02:23:39.089782] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.543 [2024-10-13 02:23:39.090019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:20.543 [2024-10-13 02:23:39.090145] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:20.543 [2024-10-13 02:23:39.090156] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:20.543 [2024-10-13 02:23:39.090272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.543 "name": "raid_bdev1", 00:09:20.543 "uuid": "f85106b7-6265-4aad-83b3-ed4cb989cea3", 00:09:20.543 "strip_size_kb": 64, 00:09:20.543 "state": "online", 00:09:20.543 "raid_level": "raid0", 00:09:20.543 "superblock": true, 00:09:20.543 "num_base_bdevs": 3, 00:09:20.543 "num_base_bdevs_discovered": 3, 00:09:20.543 "num_base_bdevs_operational": 3, 00:09:20.543 "base_bdevs_list": [ 00:09:20.543 { 00:09:20.543 "name": "BaseBdev1", 
00:09:20.543 "uuid": "eaae3590-43ea-592a-8a54-fbdcdd094ef7",
00:09:20.543 "is_configured": true,
00:09:20.543 "data_offset": 2048,
00:09:20.543 "data_size": 63488
00:09:20.543 },
00:09:20.543 {
00:09:20.543 "name": "BaseBdev2",
00:09:20.543 "uuid": "aa5e4978-c750-5ae5-b4ac-8e10b8076e1b",
00:09:20.543 "is_configured": true,
00:09:20.543 "data_offset": 2048,
00:09:20.543 "data_size": 63488
00:09:20.543 },
00:09:20.543 {
00:09:20.543 "name": "BaseBdev3",
00:09:20.543 "uuid": "e2c5063c-bf34-52e7-981d-8d651f340ec1",
00:09:20.543 "is_configured": true,
00:09:20.543 "data_offset": 2048,
00:09:20.543 "data_size": 63488
00:09:20.543 }
00:09:20.543 ]
00:09:20.543 }'
00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:20.543 02:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.111 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:21.111 02:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:21.111 [2024-10-13 02:23:39.631421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:22.047 "name": "raid_bdev1",
00:09:22.047 "uuid": "f85106b7-6265-4aad-83b3-ed4cb989cea3",
00:09:22.047 "strip_size_kb": 64,
00:09:22.047 "state": "online",
00:09:22.047 "raid_level": "raid0",
00:09:22.047 "superblock": true,
00:09:22.047 "num_base_bdevs": 3,
00:09:22.047 "num_base_bdevs_discovered": 3,
00:09:22.047 "num_base_bdevs_operational": 3,
00:09:22.047 "base_bdevs_list": [
00:09:22.047 {
00:09:22.047 "name": "BaseBdev1",
00:09:22.047 "uuid": "eaae3590-43ea-592a-8a54-fbdcdd094ef7",
00:09:22.047 "is_configured": true,
00:09:22.047 "data_offset": 2048,
00:09:22.047 "data_size": 63488
00:09:22.047 },
00:09:22.047 {
00:09:22.047 "name": "BaseBdev2",
00:09:22.047 "uuid": "aa5e4978-c750-5ae5-b4ac-8e10b8076e1b",
00:09:22.047 "is_configured": true,
00:09:22.047 "data_offset": 2048,
00:09:22.047 "data_size": 63488
00:09:22.047 },
00:09:22.047 {
00:09:22.047 "name": "BaseBdev3",
00:09:22.047 "uuid": "e2c5063c-bf34-52e7-981d-8d651f340ec1",
00:09:22.047 "is_configured": true,
00:09:22.047 "data_offset": 2048,
00:09:22.047 "data_size": 63488
00:09:22.047 }
00:09:22.047 ]
00:09:22.047 }'
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:22.047 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.306 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:22.306 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.306 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.306 [2024-10-13 02:23:40.982930] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:22.306 [2024-10-13 02:23:40.982975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:22.306 [2024-10-13 02:23:40.985508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:22.306 [2024-10-13 02:23:40.985560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:22.306 [2024-10-13 02:23:40.985595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:22.306 [2024-10-13 02:23:40.985615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:09:22.565 {
00:09:22.565 "results": [
00:09:22.565 {
00:09:22.565 "job": "raid_bdev1",
00:09:22.565 "core_mask": "0x1",
00:09:22.565 "workload": "randrw",
00:09:22.565 "percentage": 50,
00:09:22.565 "status": "finished",
00:09:22.565 "queue_depth": 1,
00:09:22.565 "io_size": 131072,
00:09:22.565 "runtime": 1.352379,
00:09:22.565 "iops": 16907.242718202517,
00:09:22.565 "mibps": 2113.4053397753146,
00:09:22.565 "io_failed": 1,
00:09:22.565 "io_timeout": 0,
00:09:22.565 "avg_latency_us": 82.04571016940544,
00:09:22.565 "min_latency_us": 25.041048034934498,
00:09:22.565 "max_latency_us": 1395.1441048034935
00:09:22.565 }
00:09:22.565 ],
00:09:22.565 "core_count": 1
00:09:22.565 }
00:09:22.565 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.565 02:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76500
00:09:22.565 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76500 ']'
00:09:22.565 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76500
00:09:22.565 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:09:22.565 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:22.565 02:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76500
killing process with pid 76500
02:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:22.565 02:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:22.565 02:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76500'
00:09:22.565 02:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76500
00:09:22.565 [2024-10-13 02:23:41.031430] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:22.565 02:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76500
00:09:22.565 [2024-10-13 02:23:41.056219] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CgMUspKrm9
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:09:22.823
00:09:22.823 real 0m3.320s
00:09:22.823 user 0m4.181s
00:09:22.823 sys 0m0.552s
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:22.823 02:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.823 ************************************
00:09:22.823 END TEST raid_write_error_test ************************************
00:09:22.823 02:23:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:22.823 02:23:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:09:22.823 02:23:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:22.823 02:23:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:22.823 02:23:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:22.824 ************************************
00:09:22.824 START TEST raid_state_function_test ************************************
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76627
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:22.824 Process raid pid: 76627
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76627'
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76627
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76627 ']'
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:22.824 02:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.824 [2024-10-13 02:23:41.484828] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:09:22.824 [2024-10-13 02:23:41.484974] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:23.082 [2024-10-13 02:23:41.631775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:23.082 [2024-10-13 02:23:41.680713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.082 [2024-10-13 02:23:41.722840] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:23.082 [2024-10-13 02:23:41.722886] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.649 [2024-10-13 02:23:42.324186] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:23.649 [2024-10-13 02:23:42.324253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:23.649 [2024-10-13 02:23:42.324265] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:23.649 [2024-10-13 02:23:42.324274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:23.649 [2024-10-13 02:23:42.324281] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:23.649 [2024-10-13 02:23:42.324294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:23.649 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:23.907 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:23.907 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:23.907 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:23.907 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:23.907 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:23.907 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:23.907 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:23.907 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:23.908 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.908 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.908 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.908 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:23.908 "name": "Existed_Raid",
00:09:23.908 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:23.908 "strip_size_kb": 64,
00:09:23.908 "state": "configuring",
00:09:23.908 "raid_level": "concat",
00:09:23.908 "superblock": false,
00:09:23.908 "num_base_bdevs": 3,
00:09:23.908 "num_base_bdevs_discovered": 0,
00:09:23.908 "num_base_bdevs_operational": 3,
00:09:23.908 "base_bdevs_list": [
00:09:23.908 {
00:09:23.908 "name": "BaseBdev1",
00:09:23.908 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:23.908 "is_configured": false,
00:09:23.908 "data_offset": 0,
00:09:23.908 "data_size": 0
00:09:23.908 },
00:09:23.908 {
00:09:23.908 "name": "BaseBdev2",
00:09:23.908 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:23.908 "is_configured": false,
00:09:23.908 "data_offset": 0,
00:09:23.908 "data_size": 0
00:09:23.908 },
00:09:23.908 {
00:09:23.908 "name": "BaseBdev3",
00:09:23.908 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:23.908 "is_configured": false,
00:09:23.908 "data_offset": 0,
00:09:23.908 "data_size": 0
00:09:23.908 }
00:09:23.908 ]
00:09:23.908 }'
00:09:23.908 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:23.908 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.168 [2024-10-13 02:23:42.771267] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:24.168 [2024-10-13 02:23:42.771319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.168 [2024-10-13 02:23:42.783257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:24.168 [2024-10-13 02:23:42.783295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:24.168 [2024-10-13 02:23:42.783304] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:24.168 [2024-10-13 02:23:42.783314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:24.168 [2024-10-13 02:23:42.783320] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:24.168 [2024-10-13 02:23:42.783328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.168 [2024-10-13 02:23:42.804102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:24.168 BaseBdev1
02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.168 [
00:09:24.168 {
00:09:24.168 "name": "BaseBdev1",
00:09:24.168 "aliases": [
00:09:24.168 "10c69fe4-44e1-4615-8dc2-0e68b43eab85"
00:09:24.168 ],
00:09:24.168 "product_name": "Malloc disk",
00:09:24.168 "block_size": 512,
00:09:24.168 "num_blocks": 65536,
00:09:24.168 "uuid": "10c69fe4-44e1-4615-8dc2-0e68b43eab85",
00:09:24.168 "assigned_rate_limits": {
00:09:24.168 "rw_ios_per_sec": 0,
00:09:24.168 "rw_mbytes_per_sec": 0,
00:09:24.168 "r_mbytes_per_sec": 0,
00:09:24.168 "w_mbytes_per_sec": 0
00:09:24.168 },
00:09:24.168 "claimed": true,
00:09:24.168 "claim_type": "exclusive_write",
00:09:24.168 "zoned": false,
00:09:24.168 "supported_io_types": {
00:09:24.168 "read": true,
00:09:24.168 "write": true,
00:09:24.168 "unmap": true,
00:09:24.168 "flush": true,
00:09:24.168 "reset": true,
00:09:24.168 "nvme_admin": false,
00:09:24.168 "nvme_io": false,
00:09:24.168 "nvme_io_md": false,
00:09:24.168 "write_zeroes": true,
00:09:24.168 "zcopy": true,
00:09:24.168 "get_zone_info": false,
00:09:24.168 "zone_management": false,
00:09:24.168 "zone_append": false,
00:09:24.168 "compare": false,
00:09:24.168 "compare_and_write": false,
00:09:24.168 "abort": true,
00:09:24.168 "seek_hole": false,
00:09:24.168 "seek_data": false,
00:09:24.168 "copy": true,
00:09:24.168 "nvme_iov_md": false
00:09:24.168 },
00:09:24.168 "memory_domains": [
00:09:24.168 {
00:09:24.168 "dma_device_id": "system",
00:09:24.168 "dma_device_type": 1
00:09:24.168 },
00:09:24.168 {
00:09:24.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:24.168 "dma_device_type": 2
00:09:24.168 }
00:09:24.168 ],
00:09:24.168 "driver_specific": {}
00:09:24.168 }
00:09:24.168 ]
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:24.168 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.426 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:24.426 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.426 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.426 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.426 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.427 "name": "Existed_Raid",
00:09:24.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.427 "strip_size_kb": 64,
00:09:24.427 "state": "configuring",
00:09:24.427 "raid_level": "concat",
00:09:24.427 "superblock": false,
00:09:24.427 "num_base_bdevs": 3,
00:09:24.427 "num_base_bdevs_discovered": 1,
00:09:24.427 "num_base_bdevs_operational": 3,
00:09:24.427 "base_bdevs_list": [
00:09:24.427 {
00:09:24.427 "name": "BaseBdev1",
00:09:24.427 "uuid": "10c69fe4-44e1-4615-8dc2-0e68b43eab85",
00:09:24.427 "is_configured": true,
00:09:24.427 "data_offset": 0,
00:09:24.427 "data_size": 65536
00:09:24.427 },
00:09:24.427 {
00:09:24.427 "name": "BaseBdev2",
00:09:24.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.427 "is_configured": false,
00:09:24.427 "data_offset": 0,
00:09:24.427 "data_size": 0
00:09:24.427 },
00:09:24.427 {
00:09:24.427 "name": "BaseBdev3",
00:09:24.427 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.427 "is_configured": false,
00:09:24.427 "data_offset": 0,
00:09:24.427 "data_size": 0
00:09:24.427 }
00:09:24.427 ]
00:09:24.427 }'
00:09:24.427 02:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.427 02:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.684 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:24.684 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.684 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.684 [2024-10-13 02:23:43.259365] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:24.684 [2024-10-13 02:23:43.259466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:09:24.684 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.685 [2024-10-13 02:23:43.271400] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:24.685 [2024-10-13 02:23:43.273268] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:24.685 [2024-10-13 02:23:43.273346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:24.685 [2024-10-13 02:23:43.273374] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:24.685 [2024-10-13 02:23:43.273406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.685 "name": "Existed_Raid",
00:09:24.685 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.685 "strip_size_kb": 64,
00:09:24.685 "state": "configuring",
00:09:24.685 "raid_level": "concat",
00:09:24.685 "superblock": false,
00:09:24.685 "num_base_bdevs": 3,
00:09:24.685 "num_base_bdevs_discovered": 1,
00:09:24.685 "num_base_bdevs_operational": 3,
00:09:24.685 "base_bdevs_list": [
00:09:24.685 {
00:09:24.685 "name": "BaseBdev1",
00:09:24.685 "uuid": "10c69fe4-44e1-4615-8dc2-0e68b43eab85",
00:09:24.685 "is_configured": true,
00:09:24.685 "data_offset": 0,
00:09:24.685 "data_size": 65536
00:09:24.685 },
00:09:24.685 {
00:09:24.685 "name": "BaseBdev2",
00:09:24.685 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.685 "is_configured": false,
00:09:24.685 "data_offset": 0,
00:09:24.685 "data_size": 0
00:09:24.685 },
00:09:24.685 {
00:09:24.685 "name": "BaseBdev3",
00:09:24.685 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.685 "is_configured": false,
00:09:24.685 "data_offset": 0,
00:09:24.685 "data_size": 0
00:09:24.685 }
00:09:24.685 ]
00:09:24.685 }'
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.685 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.251 [2024-10-13 02:23:43.718375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:25.251 BaseBdev2
02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.251 [
00:09:25.251 {
00:09:25.251 "name": "BaseBdev2",
00:09:25.251 "aliases": [
00:09:25.251 "c29fa6f7-d2e0-4c06-ba8a-af7bce6311c6"
00:09:25.251 ],
00:09:25.251 "product_name": "Malloc disk",
00:09:25.251 "block_size": 512,
00:09:25.251 "num_blocks": 65536,
00:09:25.251 "uuid": "c29fa6f7-d2e0-4c06-ba8a-af7bce6311c6",
00:09:25.251 "assigned_rate_limits": {
00:09:25.251 "rw_ios_per_sec": 0,
00:09:25.251 "rw_mbytes_per_sec": 0,
00:09:25.251 "r_mbytes_per_sec": 0,
00:09:25.251 "w_mbytes_per_sec": 0
00:09:25.251 },
00:09:25.251 "claimed": true,
00:09:25.251 "claim_type": "exclusive_write",
00:09:25.251 "zoned": false,
00:09:25.251 "supported_io_types": {
00:09:25.251 "read": true,
00:09:25.251 "write": true,
00:09:25.251 "unmap": true,
00:09:25.251 "flush": true,
00:09:25.251 "reset": true,
00:09:25.251 "nvme_admin": false,
00:09:25.251 "nvme_io": false,
00:09:25.251 "nvme_io_md": false,
00:09:25.251 "write_zeroes": true,
00:09:25.251 "zcopy": true,
00:09:25.251 "get_zone_info": false,
00:09:25.251 "zone_management": false,
00:09:25.251 "zone_append": false,
00:09:25.251 "compare": false,
00:09:25.251 "compare_and_write": false,
00:09:25.251 "abort": true,
00:09:25.251 "seek_hole": false,
00:09:25.251 "seek_data": false,
00:09:25.251 "copy": true,
00:09:25.251 "nvme_iov_md": false
00:09:25.251 },
00:09:25.251 "memory_domains": [
00:09:25.251 {
00:09:25.251 "dma_device_id": "system",
00:09:25.251 "dma_device_type": 1
00:09:25.251 },
00:09:25.251 {
00:09:25.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:25.251 "dma_device_type": 2
00:09:25.251 }
00:09:25.251 ],
00:09:25.251 "driver_specific": {}
00:09:25.251 }
00:09:25.251 ]
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:25.251 02:23:43 bdev_raid.raid_state_function_test --
bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.251 "name": "Existed_Raid", 00:09:25.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.251 "strip_size_kb": 64, 00:09:25.251 "state": "configuring", 00:09:25.251 "raid_level": "concat", 00:09:25.251 "superblock": false, 00:09:25.251 "num_base_bdevs": 3, 00:09:25.251 "num_base_bdevs_discovered": 2, 00:09:25.251 "num_base_bdevs_operational": 3, 00:09:25.251 "base_bdevs_list": [ 00:09:25.251 { 00:09:25.251 "name": "BaseBdev1", 00:09:25.251 "uuid": "10c69fe4-44e1-4615-8dc2-0e68b43eab85", 00:09:25.251 "is_configured": true, 00:09:25.251 "data_offset": 0, 00:09:25.251 "data_size": 65536 00:09:25.251 }, 00:09:25.251 { 00:09:25.251 "name": "BaseBdev2", 00:09:25.251 "uuid": "c29fa6f7-d2e0-4c06-ba8a-af7bce6311c6", 00:09:25.251 "is_configured": true, 00:09:25.251 "data_offset": 0, 00:09:25.251 "data_size": 65536 00:09:25.251 }, 00:09:25.251 { 00:09:25.251 "name": "BaseBdev3", 00:09:25.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.251 "is_configured": false, 00:09:25.251 "data_offset": 0, 00:09:25.251 "data_size": 0 00:09:25.251 } 00:09:25.251 ] 00:09:25.251 }' 00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.251 02:23:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.510 [2024-10-13 02:23:44.172598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.510 [2024-10-13 02:23:44.172707] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:25.510 [2024-10-13 02:23:44.172736] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:25.510 [2024-10-13 02:23:44.173024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:25.510 [2024-10-13 02:23:44.173216] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:25.510 [2024-10-13 02:23:44.173256] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:25.510 [2024-10-13 02:23:44.173508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.510 BaseBdev3 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.510 02:23:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.510 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.769 [ 00:09:25.769 { 00:09:25.769 "name": "BaseBdev3", 00:09:25.769 "aliases": [ 00:09:25.769 "63093f5e-f1ec-48c4-ac85-ec9f7713d7cd" 00:09:25.769 ], 00:09:25.769 "product_name": "Malloc disk", 00:09:25.769 "block_size": 512, 00:09:25.769 "num_blocks": 65536, 00:09:25.769 "uuid": "63093f5e-f1ec-48c4-ac85-ec9f7713d7cd", 00:09:25.769 "assigned_rate_limits": { 00:09:25.769 "rw_ios_per_sec": 0, 00:09:25.769 "rw_mbytes_per_sec": 0, 00:09:25.769 "r_mbytes_per_sec": 0, 00:09:25.769 "w_mbytes_per_sec": 0 00:09:25.769 }, 00:09:25.769 "claimed": true, 00:09:25.769 "claim_type": "exclusive_write", 00:09:25.769 "zoned": false, 00:09:25.769 "supported_io_types": { 00:09:25.769 "read": true, 00:09:25.769 "write": true, 00:09:25.769 "unmap": true, 00:09:25.769 "flush": true, 00:09:25.769 "reset": true, 00:09:25.769 "nvme_admin": false, 00:09:25.769 "nvme_io": false, 00:09:25.769 "nvme_io_md": false, 00:09:25.769 "write_zeroes": true, 00:09:25.769 "zcopy": true, 00:09:25.769 "get_zone_info": false, 00:09:25.769 "zone_management": false, 00:09:25.769 "zone_append": false, 00:09:25.769 "compare": false, 
00:09:25.769 "compare_and_write": false, 00:09:25.769 "abort": true, 00:09:25.769 "seek_hole": false, 00:09:25.769 "seek_data": false, 00:09:25.769 "copy": true, 00:09:25.769 "nvme_iov_md": false 00:09:25.769 }, 00:09:25.769 "memory_domains": [ 00:09:25.769 { 00:09:25.769 "dma_device_id": "system", 00:09:25.769 "dma_device_type": 1 00:09:25.769 }, 00:09:25.769 { 00:09:25.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.769 "dma_device_type": 2 00:09:25.769 } 00:09:25.769 ], 00:09:25.769 "driver_specific": {} 00:09:25.769 } 00:09:25.769 ] 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.769 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.769 "name": "Existed_Raid", 00:09:25.769 "uuid": "da52a280-ff13-479d-a72d-fa76e2dc4be7", 00:09:25.769 "strip_size_kb": 64, 00:09:25.769 "state": "online", 00:09:25.769 "raid_level": "concat", 00:09:25.769 "superblock": false, 00:09:25.769 "num_base_bdevs": 3, 00:09:25.769 "num_base_bdevs_discovered": 3, 00:09:25.769 "num_base_bdevs_operational": 3, 00:09:25.769 "base_bdevs_list": [ 00:09:25.769 { 00:09:25.769 "name": "BaseBdev1", 00:09:25.770 "uuid": "10c69fe4-44e1-4615-8dc2-0e68b43eab85", 00:09:25.770 "is_configured": true, 00:09:25.770 "data_offset": 0, 00:09:25.770 "data_size": 65536 00:09:25.770 }, 00:09:25.770 { 00:09:25.770 "name": "BaseBdev2", 00:09:25.770 "uuid": "c29fa6f7-d2e0-4c06-ba8a-af7bce6311c6", 00:09:25.770 "is_configured": true, 00:09:25.770 "data_offset": 0, 00:09:25.770 "data_size": 65536 00:09:25.770 }, 00:09:25.770 { 00:09:25.770 "name": "BaseBdev3", 00:09:25.770 "uuid": "63093f5e-f1ec-48c4-ac85-ec9f7713d7cd", 00:09:25.770 "is_configured": true, 00:09:25.770 "data_offset": 0, 00:09:25.770 "data_size": 65536 00:09:25.770 } 00:09:25.770 ] 00:09:25.770 }' 00:09:25.770 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:25.770 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.028 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.028 [2024-10-13 02:23:44.692112] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.287 "name": "Existed_Raid", 00:09:26.287 "aliases": [ 00:09:26.287 "da52a280-ff13-479d-a72d-fa76e2dc4be7" 00:09:26.287 ], 00:09:26.287 "product_name": "Raid Volume", 00:09:26.287 "block_size": 512, 00:09:26.287 "num_blocks": 196608, 00:09:26.287 "uuid": "da52a280-ff13-479d-a72d-fa76e2dc4be7", 00:09:26.287 "assigned_rate_limits": { 00:09:26.287 "rw_ios_per_sec": 0, 00:09:26.287 "rw_mbytes_per_sec": 0, 00:09:26.287 "r_mbytes_per_sec": 
0, 00:09:26.287 "w_mbytes_per_sec": 0 00:09:26.287 }, 00:09:26.287 "claimed": false, 00:09:26.287 "zoned": false, 00:09:26.287 "supported_io_types": { 00:09:26.287 "read": true, 00:09:26.287 "write": true, 00:09:26.287 "unmap": true, 00:09:26.287 "flush": true, 00:09:26.287 "reset": true, 00:09:26.287 "nvme_admin": false, 00:09:26.287 "nvme_io": false, 00:09:26.287 "nvme_io_md": false, 00:09:26.287 "write_zeroes": true, 00:09:26.287 "zcopy": false, 00:09:26.287 "get_zone_info": false, 00:09:26.287 "zone_management": false, 00:09:26.287 "zone_append": false, 00:09:26.287 "compare": false, 00:09:26.287 "compare_and_write": false, 00:09:26.287 "abort": false, 00:09:26.287 "seek_hole": false, 00:09:26.287 "seek_data": false, 00:09:26.287 "copy": false, 00:09:26.287 "nvme_iov_md": false 00:09:26.287 }, 00:09:26.287 "memory_domains": [ 00:09:26.287 { 00:09:26.287 "dma_device_id": "system", 00:09:26.287 "dma_device_type": 1 00:09:26.287 }, 00:09:26.287 { 00:09:26.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.287 "dma_device_type": 2 00:09:26.287 }, 00:09:26.287 { 00:09:26.287 "dma_device_id": "system", 00:09:26.287 "dma_device_type": 1 00:09:26.287 }, 00:09:26.287 { 00:09:26.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.287 "dma_device_type": 2 00:09:26.287 }, 00:09:26.287 { 00:09:26.287 "dma_device_id": "system", 00:09:26.287 "dma_device_type": 1 00:09:26.287 }, 00:09:26.287 { 00:09:26.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.287 "dma_device_type": 2 00:09:26.287 } 00:09:26.287 ], 00:09:26.287 "driver_specific": { 00:09:26.287 "raid": { 00:09:26.287 "uuid": "da52a280-ff13-479d-a72d-fa76e2dc4be7", 00:09:26.287 "strip_size_kb": 64, 00:09:26.287 "state": "online", 00:09:26.287 "raid_level": "concat", 00:09:26.287 "superblock": false, 00:09:26.287 "num_base_bdevs": 3, 00:09:26.287 "num_base_bdevs_discovered": 3, 00:09:26.287 "num_base_bdevs_operational": 3, 00:09:26.287 "base_bdevs_list": [ 00:09:26.287 { 00:09:26.287 "name": "BaseBdev1", 
00:09:26.287 "uuid": "10c69fe4-44e1-4615-8dc2-0e68b43eab85", 00:09:26.287 "is_configured": true, 00:09:26.287 "data_offset": 0, 00:09:26.287 "data_size": 65536 00:09:26.287 }, 00:09:26.287 { 00:09:26.287 "name": "BaseBdev2", 00:09:26.287 "uuid": "c29fa6f7-d2e0-4c06-ba8a-af7bce6311c6", 00:09:26.287 "is_configured": true, 00:09:26.287 "data_offset": 0, 00:09:26.287 "data_size": 65536 00:09:26.287 }, 00:09:26.287 { 00:09:26.287 "name": "BaseBdev3", 00:09:26.287 "uuid": "63093f5e-f1ec-48c4-ac85-ec9f7713d7cd", 00:09:26.287 "is_configured": true, 00:09:26.287 "data_offset": 0, 00:09:26.287 "data_size": 65536 00:09:26.287 } 00:09:26.287 ] 00:09:26.287 } 00:09:26.287 } 00:09:26.287 }' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:26.287 BaseBdev2 00:09:26.287 BaseBdev3' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.287 [2024-10-13 02:23:44.939445] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.287 [2024-10-13 02:23:44.939564] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.287 [2024-10-13 02:23:44.939647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.287 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.288 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.546 02:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.546 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.546 "name": "Existed_Raid", 00:09:26.546 "uuid": "da52a280-ff13-479d-a72d-fa76e2dc4be7", 00:09:26.546 "strip_size_kb": 64, 00:09:26.546 "state": "offline", 00:09:26.546 "raid_level": "concat", 00:09:26.546 "superblock": false, 00:09:26.546 "num_base_bdevs": 3, 00:09:26.546 "num_base_bdevs_discovered": 2, 00:09:26.546 "num_base_bdevs_operational": 2, 00:09:26.546 "base_bdevs_list": [ 00:09:26.546 { 00:09:26.546 "name": null, 00:09:26.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.546 "is_configured": false, 00:09:26.546 "data_offset": 0, 00:09:26.546 "data_size": 65536 00:09:26.546 }, 00:09:26.546 { 00:09:26.546 "name": "BaseBdev2", 00:09:26.546 "uuid": 
"c29fa6f7-d2e0-4c06-ba8a-af7bce6311c6", 00:09:26.546 "is_configured": true, 00:09:26.546 "data_offset": 0, 00:09:26.546 "data_size": 65536 00:09:26.546 }, 00:09:26.546 { 00:09:26.546 "name": "BaseBdev3", 00:09:26.546 "uuid": "63093f5e-f1ec-48c4-ac85-ec9f7713d7cd", 00:09:26.546 "is_configured": true, 00:09:26.546 "data_offset": 0, 00:09:26.546 "data_size": 65536 00:09:26.546 } 00:09:26.546 ] 00:09:26.546 }' 00:09:26.546 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.546 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.805 [2024-10-13 02:23:45.442052] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.805 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.066 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:27.066 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:27.066 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:27.066 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.066 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.066 [2024-10-13 02:23:45.493195] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:27.066 [2024-10-13 02:23:45.493293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:27.066 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.066 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:27.066 02:23:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.066 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 BaseBdev2 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.067 
02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 [ 00:09:27.067 { 00:09:27.067 "name": "BaseBdev2", 00:09:27.067 "aliases": [ 00:09:27.067 "41d71936-52a2-4788-bb3b-071911fe86f3" 00:09:27.067 ], 00:09:27.067 "product_name": "Malloc disk", 00:09:27.067 "block_size": 512, 00:09:27.067 "num_blocks": 65536, 00:09:27.067 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:27.067 "assigned_rate_limits": { 00:09:27.067 "rw_ios_per_sec": 0, 00:09:27.067 "rw_mbytes_per_sec": 0, 00:09:27.067 "r_mbytes_per_sec": 0, 00:09:27.067 "w_mbytes_per_sec": 0 00:09:27.067 }, 00:09:27.067 "claimed": false, 00:09:27.067 "zoned": false, 00:09:27.067 "supported_io_types": { 00:09:27.067 "read": true, 00:09:27.067 "write": true, 00:09:27.067 "unmap": true, 00:09:27.067 "flush": true, 00:09:27.067 "reset": true, 00:09:27.067 "nvme_admin": false, 00:09:27.067 "nvme_io": false, 00:09:27.067 "nvme_io_md": false, 00:09:27.067 "write_zeroes": true, 
00:09:27.067 "zcopy": true, 00:09:27.067 "get_zone_info": false, 00:09:27.067 "zone_management": false, 00:09:27.067 "zone_append": false, 00:09:27.067 "compare": false, 00:09:27.067 "compare_and_write": false, 00:09:27.067 "abort": true, 00:09:27.067 "seek_hole": false, 00:09:27.067 "seek_data": false, 00:09:27.067 "copy": true, 00:09:27.067 "nvme_iov_md": false 00:09:27.067 }, 00:09:27.067 "memory_domains": [ 00:09:27.067 { 00:09:27.067 "dma_device_id": "system", 00:09:27.067 "dma_device_type": 1 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.067 "dma_device_type": 2 00:09:27.067 } 00:09:27.067 ], 00:09:27.067 "driver_specific": {} 00:09:27.067 } 00:09:27.067 ] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 BaseBdev3 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.067 02:23:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 [ 00:09:27.067 { 00:09:27.067 "name": "BaseBdev3", 00:09:27.067 "aliases": [ 00:09:27.067 "7144ea9e-9f42-46a8-880c-3dd408355194" 00:09:27.067 ], 00:09:27.067 "product_name": "Malloc disk", 00:09:27.067 "block_size": 512, 00:09:27.067 "num_blocks": 65536, 00:09:27.067 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:27.067 "assigned_rate_limits": { 00:09:27.067 "rw_ios_per_sec": 0, 00:09:27.067 "rw_mbytes_per_sec": 0, 00:09:27.067 "r_mbytes_per_sec": 0, 00:09:27.067 "w_mbytes_per_sec": 0 00:09:27.067 }, 00:09:27.067 "claimed": false, 00:09:27.067 "zoned": false, 00:09:27.067 "supported_io_types": { 00:09:27.067 "read": true, 00:09:27.067 "write": true, 00:09:27.067 "unmap": true, 00:09:27.067 "flush": true, 00:09:27.067 "reset": true, 00:09:27.067 "nvme_admin": false, 00:09:27.067 "nvme_io": false, 00:09:27.067 "nvme_io_md": false, 00:09:27.067 "write_zeroes": true, 
00:09:27.067 "zcopy": true, 00:09:27.067 "get_zone_info": false, 00:09:27.067 "zone_management": false, 00:09:27.067 "zone_append": false, 00:09:27.067 "compare": false, 00:09:27.067 "compare_and_write": false, 00:09:27.067 "abort": true, 00:09:27.067 "seek_hole": false, 00:09:27.067 "seek_data": false, 00:09:27.067 "copy": true, 00:09:27.067 "nvme_iov_md": false 00:09:27.067 }, 00:09:27.067 "memory_domains": [ 00:09:27.067 { 00:09:27.067 "dma_device_id": "system", 00:09:27.067 "dma_device_type": 1 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.067 "dma_device_type": 2 00:09:27.067 } 00:09:27.067 ], 00:09:27.067 "driver_specific": {} 00:09:27.067 } 00:09:27.067 ] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 [2024-10-13 02:23:45.669960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.067 [2024-10-13 02:23:45.670076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.067 [2024-10-13 02:23:45.670122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.067 [2024-10-13 02:23:45.671999] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.068 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.068 02:23:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.068 "name": "Existed_Raid", 00:09:27.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.068 "strip_size_kb": 64, 00:09:27.068 "state": "configuring", 00:09:27.068 "raid_level": "concat", 00:09:27.068 "superblock": false, 00:09:27.068 "num_base_bdevs": 3, 00:09:27.068 "num_base_bdevs_discovered": 2, 00:09:27.068 "num_base_bdevs_operational": 3, 00:09:27.068 "base_bdevs_list": [ 00:09:27.068 { 00:09:27.068 "name": "BaseBdev1", 00:09:27.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.068 "is_configured": false, 00:09:27.068 "data_offset": 0, 00:09:27.068 "data_size": 0 00:09:27.068 }, 00:09:27.068 { 00:09:27.068 "name": "BaseBdev2", 00:09:27.068 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:27.068 "is_configured": true, 00:09:27.068 "data_offset": 0, 00:09:27.068 "data_size": 65536 00:09:27.068 }, 00:09:27.068 { 00:09:27.068 "name": "BaseBdev3", 00:09:27.068 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:27.068 "is_configured": true, 00:09:27.068 "data_offset": 0, 00:09:27.068 "data_size": 65536 00:09:27.068 } 00:09:27.068 ] 00:09:27.068 }' 00:09:27.068 02:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.068 02:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.639 [2024-10-13 02:23:46.125159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.639 "name": "Existed_Raid", 00:09:27.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.639 "strip_size_kb": 64, 00:09:27.639 "state": "configuring", 00:09:27.639 "raid_level": "concat", 00:09:27.639 "superblock": false, 
00:09:27.639 "num_base_bdevs": 3, 00:09:27.639 "num_base_bdevs_discovered": 1, 00:09:27.639 "num_base_bdevs_operational": 3, 00:09:27.639 "base_bdevs_list": [ 00:09:27.639 { 00:09:27.639 "name": "BaseBdev1", 00:09:27.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.639 "is_configured": false, 00:09:27.639 "data_offset": 0, 00:09:27.639 "data_size": 0 00:09:27.639 }, 00:09:27.639 { 00:09:27.639 "name": null, 00:09:27.639 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:27.639 "is_configured": false, 00:09:27.639 "data_offset": 0, 00:09:27.639 "data_size": 65536 00:09:27.639 }, 00:09:27.639 { 00:09:27.639 "name": "BaseBdev3", 00:09:27.639 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:27.639 "is_configured": true, 00:09:27.639 "data_offset": 0, 00:09:27.639 "data_size": 65536 00:09:27.639 } 00:09:27.639 ] 00:09:27.639 }' 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.639 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.897 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:27.897 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.897 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.897 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.897 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.897 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:27.897 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:27.897 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.897 
02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.156 [2024-10-13 02:23:46.591551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.156 BaseBdev1 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.156 [ 00:09:28.156 { 00:09:28.156 "name": "BaseBdev1", 00:09:28.156 "aliases": [ 00:09:28.156 "84438ae7-c81a-4470-80ce-fe1d0de570ae" 00:09:28.156 ], 00:09:28.156 "product_name": 
"Malloc disk", 00:09:28.156 "block_size": 512, 00:09:28.156 "num_blocks": 65536, 00:09:28.156 "uuid": "84438ae7-c81a-4470-80ce-fe1d0de570ae", 00:09:28.156 "assigned_rate_limits": { 00:09:28.156 "rw_ios_per_sec": 0, 00:09:28.156 "rw_mbytes_per_sec": 0, 00:09:28.156 "r_mbytes_per_sec": 0, 00:09:28.156 "w_mbytes_per_sec": 0 00:09:28.156 }, 00:09:28.156 "claimed": true, 00:09:28.156 "claim_type": "exclusive_write", 00:09:28.156 "zoned": false, 00:09:28.156 "supported_io_types": { 00:09:28.156 "read": true, 00:09:28.156 "write": true, 00:09:28.156 "unmap": true, 00:09:28.156 "flush": true, 00:09:28.156 "reset": true, 00:09:28.156 "nvme_admin": false, 00:09:28.156 "nvme_io": false, 00:09:28.156 "nvme_io_md": false, 00:09:28.156 "write_zeroes": true, 00:09:28.156 "zcopy": true, 00:09:28.156 "get_zone_info": false, 00:09:28.156 "zone_management": false, 00:09:28.156 "zone_append": false, 00:09:28.156 "compare": false, 00:09:28.156 "compare_and_write": false, 00:09:28.156 "abort": true, 00:09:28.156 "seek_hole": false, 00:09:28.156 "seek_data": false, 00:09:28.156 "copy": true, 00:09:28.156 "nvme_iov_md": false 00:09:28.156 }, 00:09:28.156 "memory_domains": [ 00:09:28.156 { 00:09:28.156 "dma_device_id": "system", 00:09:28.156 "dma_device_type": 1 00:09:28.156 }, 00:09:28.156 { 00:09:28.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.156 "dma_device_type": 2 00:09:28.156 } 00:09:28.156 ], 00:09:28.156 "driver_specific": {} 00:09:28.156 } 00:09:28.156 ] 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.156 02:23:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.156 "name": "Existed_Raid", 00:09:28.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.156 "strip_size_kb": 64, 00:09:28.156 "state": "configuring", 00:09:28.156 "raid_level": "concat", 00:09:28.156 "superblock": false, 00:09:28.156 "num_base_bdevs": 3, 00:09:28.156 "num_base_bdevs_discovered": 2, 00:09:28.156 "num_base_bdevs_operational": 3, 00:09:28.156 "base_bdevs_list": [ 00:09:28.156 { 00:09:28.156 "name": "BaseBdev1", 
00:09:28.156 "uuid": "84438ae7-c81a-4470-80ce-fe1d0de570ae", 00:09:28.156 "is_configured": true, 00:09:28.156 "data_offset": 0, 00:09:28.156 "data_size": 65536 00:09:28.156 }, 00:09:28.156 { 00:09:28.156 "name": null, 00:09:28.156 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:28.156 "is_configured": false, 00:09:28.156 "data_offset": 0, 00:09:28.156 "data_size": 65536 00:09:28.156 }, 00:09:28.156 { 00:09:28.156 "name": "BaseBdev3", 00:09:28.156 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:28.156 "is_configured": true, 00:09:28.156 "data_offset": 0, 00:09:28.156 "data_size": 65536 00:09:28.156 } 00:09:28.156 ] 00:09:28.156 }' 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.156 02:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.414 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.414 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.414 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.414 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.414 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.673 [2024-10-13 02:23:47.106740] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.673 
02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.673 "name": "Existed_Raid", 00:09:28.673 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:28.673 "strip_size_kb": 64, 00:09:28.673 "state": "configuring", 00:09:28.673 "raid_level": "concat", 00:09:28.673 "superblock": false, 00:09:28.673 "num_base_bdevs": 3, 00:09:28.673 "num_base_bdevs_discovered": 1, 00:09:28.673 "num_base_bdevs_operational": 3, 00:09:28.673 "base_bdevs_list": [ 00:09:28.673 { 00:09:28.673 "name": "BaseBdev1", 00:09:28.673 "uuid": "84438ae7-c81a-4470-80ce-fe1d0de570ae", 00:09:28.673 "is_configured": true, 00:09:28.673 "data_offset": 0, 00:09:28.673 "data_size": 65536 00:09:28.673 }, 00:09:28.673 { 00:09:28.673 "name": null, 00:09:28.673 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:28.673 "is_configured": false, 00:09:28.673 "data_offset": 0, 00:09:28.673 "data_size": 65536 00:09:28.673 }, 00:09:28.673 { 00:09:28.673 "name": null, 00:09:28.673 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:28.673 "is_configured": false, 00:09:28.673 "data_offset": 0, 00:09:28.673 "data_size": 65536 00:09:28.673 } 00:09:28.673 ] 00:09:28.673 }' 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.673 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.931 [2024-10-13 02:23:47.581986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.931 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.190 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.190 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.190 "name": "Existed_Raid", 00:09:29.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.190 "strip_size_kb": 64, 00:09:29.190 "state": "configuring", 00:09:29.190 "raid_level": "concat", 00:09:29.190 "superblock": false, 00:09:29.190 "num_base_bdevs": 3, 00:09:29.190 "num_base_bdevs_discovered": 2, 00:09:29.190 "num_base_bdevs_operational": 3, 00:09:29.190 "base_bdevs_list": [ 00:09:29.190 { 00:09:29.190 "name": "BaseBdev1", 00:09:29.190 "uuid": "84438ae7-c81a-4470-80ce-fe1d0de570ae", 00:09:29.190 "is_configured": true, 00:09:29.190 "data_offset": 0, 00:09:29.190 "data_size": 65536 00:09:29.190 }, 00:09:29.190 { 00:09:29.190 "name": null, 00:09:29.190 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:29.190 "is_configured": false, 00:09:29.190 "data_offset": 0, 00:09:29.190 "data_size": 65536 00:09:29.190 }, 00:09:29.190 { 00:09:29.190 "name": "BaseBdev3", 00:09:29.190 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:29.190 "is_configured": true, 00:09:29.190 "data_offset": 0, 00:09:29.190 "data_size": 65536 00:09:29.190 } 00:09:29.190 ] 00:09:29.190 }' 00:09:29.190 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.190 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.450 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.450 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.450 02:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:29.450 02:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.450 [2024-10-13 02:23:48.041167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.450 02:23:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.450 "name": "Existed_Raid", 00:09:29.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.450 "strip_size_kb": 64, 00:09:29.450 "state": "configuring", 00:09:29.450 "raid_level": "concat", 00:09:29.450 "superblock": false, 00:09:29.450 "num_base_bdevs": 3, 00:09:29.450 "num_base_bdevs_discovered": 1, 00:09:29.450 "num_base_bdevs_operational": 3, 00:09:29.450 "base_bdevs_list": [ 00:09:29.450 { 00:09:29.450 "name": null, 00:09:29.450 "uuid": "84438ae7-c81a-4470-80ce-fe1d0de570ae", 00:09:29.450 "is_configured": false, 00:09:29.450 "data_offset": 0, 00:09:29.450 "data_size": 65536 00:09:29.450 }, 00:09:29.450 { 00:09:29.450 "name": null, 00:09:29.450 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:29.450 "is_configured": false, 00:09:29.450 "data_offset": 0, 00:09:29.450 "data_size": 65536 00:09:29.450 }, 00:09:29.450 { 00:09:29.450 "name": "BaseBdev3", 00:09:29.450 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:29.450 "is_configured": true, 00:09:29.450 "data_offset": 0, 00:09:29.450 "data_size": 65536 00:09:29.450 } 00:09:29.450 ] 00:09:29.450 }' 00:09:29.450 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.450 02:23:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.020 [2024-10-13 02:23:48.542884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.020 02:23:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.020 "name": "Existed_Raid", 00:09:30.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.020 "strip_size_kb": 64, 00:09:30.020 "state": "configuring", 00:09:30.020 "raid_level": "concat", 00:09:30.020 "superblock": false, 00:09:30.020 "num_base_bdevs": 3, 00:09:30.020 "num_base_bdevs_discovered": 2, 00:09:30.020 "num_base_bdevs_operational": 3, 00:09:30.020 "base_bdevs_list": [ 00:09:30.020 { 00:09:30.020 "name": null, 00:09:30.020 "uuid": "84438ae7-c81a-4470-80ce-fe1d0de570ae", 00:09:30.020 "is_configured": false, 00:09:30.020 "data_offset": 0, 00:09:30.020 "data_size": 65536 00:09:30.020 }, 00:09:30.020 { 00:09:30.020 "name": "BaseBdev2", 00:09:30.020 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:30.020 "is_configured": true, 00:09:30.020 "data_offset": 
0, 00:09:30.020 "data_size": 65536 00:09:30.020 }, 00:09:30.020 { 00:09:30.020 "name": "BaseBdev3", 00:09:30.020 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:30.020 "is_configured": true, 00:09:30.020 "data_offset": 0, 00:09:30.020 "data_size": 65536 00:09:30.020 } 00:09:30.020 ] 00:09:30.020 }' 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.020 02:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 84438ae7-c81a-4470-80ce-fe1d0de570ae 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.587 NewBaseBdev 00:09:30.587 [2024-10-13 02:23:49.116907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:30.587 [2024-10-13 02:23:49.116955] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:30.587 [2024-10-13 02:23:49.116964] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:30.587 [2024-10-13 02:23:49.117212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:30.587 [2024-10-13 02:23:49.117323] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:30.587 [2024-10-13 02:23:49.117332] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:30.587 [2024-10-13 02:23:49.117528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.587 
02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.587 [ 00:09:30.587 { 00:09:30.587 "name": "NewBaseBdev", 00:09:30.587 "aliases": [ 00:09:30.587 "84438ae7-c81a-4470-80ce-fe1d0de570ae" 00:09:30.587 ], 00:09:30.587 "product_name": "Malloc disk", 00:09:30.587 "block_size": 512, 00:09:30.587 "num_blocks": 65536, 00:09:30.587 "uuid": "84438ae7-c81a-4470-80ce-fe1d0de570ae", 00:09:30.587 "assigned_rate_limits": { 00:09:30.587 "rw_ios_per_sec": 0, 00:09:30.587 "rw_mbytes_per_sec": 0, 00:09:30.587 "r_mbytes_per_sec": 0, 00:09:30.587 "w_mbytes_per_sec": 0 00:09:30.587 }, 00:09:30.587 "claimed": true, 00:09:30.587 "claim_type": "exclusive_write", 00:09:30.587 "zoned": false, 00:09:30.587 "supported_io_types": { 00:09:30.587 "read": true, 00:09:30.587 "write": true, 00:09:30.587 "unmap": true, 00:09:30.587 "flush": true, 00:09:30.587 "reset": true, 00:09:30.587 "nvme_admin": false, 00:09:30.587 "nvme_io": false, 00:09:30.587 "nvme_io_md": false, 00:09:30.587 "write_zeroes": true, 00:09:30.587 "zcopy": true, 00:09:30.587 "get_zone_info": false, 00:09:30.587 "zone_management": false, 00:09:30.587 "zone_append": false, 00:09:30.587 "compare": false, 00:09:30.587 "compare_and_write": false, 00:09:30.587 "abort": true, 00:09:30.587 "seek_hole": false, 00:09:30.587 "seek_data": false, 00:09:30.587 "copy": true, 00:09:30.587 "nvme_iov_md": false 00:09:30.587 }, 00:09:30.587 
"memory_domains": [ 00:09:30.587 { 00:09:30.587 "dma_device_id": "system", 00:09:30.587 "dma_device_type": 1 00:09:30.587 }, 00:09:30.587 { 00:09:30.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.587 "dma_device_type": 2 00:09:30.587 } 00:09:30.587 ], 00:09:30.587 "driver_specific": {} 00:09:30.587 } 00:09:30.587 ] 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.587 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.588 "name": "Existed_Raid", 00:09:30.588 "uuid": "4e841430-0633-4668-9fc3-40b61c812f9d", 00:09:30.588 "strip_size_kb": 64, 00:09:30.588 "state": "online", 00:09:30.588 "raid_level": "concat", 00:09:30.588 "superblock": false, 00:09:30.588 "num_base_bdevs": 3, 00:09:30.588 "num_base_bdevs_discovered": 3, 00:09:30.588 "num_base_bdevs_operational": 3, 00:09:30.588 "base_bdevs_list": [ 00:09:30.588 { 00:09:30.588 "name": "NewBaseBdev", 00:09:30.588 "uuid": "84438ae7-c81a-4470-80ce-fe1d0de570ae", 00:09:30.588 "is_configured": true, 00:09:30.588 "data_offset": 0, 00:09:30.588 "data_size": 65536 00:09:30.588 }, 00:09:30.588 { 00:09:30.588 "name": "BaseBdev2", 00:09:30.588 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:30.588 "is_configured": true, 00:09:30.588 "data_offset": 0, 00:09:30.588 "data_size": 65536 00:09:30.588 }, 00:09:30.588 { 00:09:30.588 "name": "BaseBdev3", 00:09:30.588 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:30.588 "is_configured": true, 00:09:30.588 "data_offset": 0, 00:09:30.588 "data_size": 65536 00:09:30.588 } 00:09:30.588 ] 00:09:30.588 }' 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.588 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.155 [2024-10-13 02:23:49.620417] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.155 "name": "Existed_Raid", 00:09:31.155 "aliases": [ 00:09:31.155 "4e841430-0633-4668-9fc3-40b61c812f9d" 00:09:31.155 ], 00:09:31.155 "product_name": "Raid Volume", 00:09:31.155 "block_size": 512, 00:09:31.155 "num_blocks": 196608, 00:09:31.155 "uuid": "4e841430-0633-4668-9fc3-40b61c812f9d", 00:09:31.155 "assigned_rate_limits": { 00:09:31.155 "rw_ios_per_sec": 0, 00:09:31.155 "rw_mbytes_per_sec": 0, 00:09:31.155 "r_mbytes_per_sec": 0, 00:09:31.155 "w_mbytes_per_sec": 0 00:09:31.155 }, 00:09:31.155 "claimed": false, 00:09:31.155 "zoned": false, 00:09:31.155 "supported_io_types": { 00:09:31.155 "read": true, 00:09:31.155 "write": true, 00:09:31.155 "unmap": true, 00:09:31.155 "flush": true, 00:09:31.155 "reset": true, 00:09:31.155 "nvme_admin": false, 00:09:31.155 "nvme_io": false, 00:09:31.155 "nvme_io_md": false, 00:09:31.155 "write_zeroes": true, 
00:09:31.155 "zcopy": false, 00:09:31.155 "get_zone_info": false, 00:09:31.155 "zone_management": false, 00:09:31.155 "zone_append": false, 00:09:31.155 "compare": false, 00:09:31.155 "compare_and_write": false, 00:09:31.155 "abort": false, 00:09:31.155 "seek_hole": false, 00:09:31.155 "seek_data": false, 00:09:31.155 "copy": false, 00:09:31.155 "nvme_iov_md": false 00:09:31.155 }, 00:09:31.155 "memory_domains": [ 00:09:31.155 { 00:09:31.155 "dma_device_id": "system", 00:09:31.155 "dma_device_type": 1 00:09:31.155 }, 00:09:31.155 { 00:09:31.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.155 "dma_device_type": 2 00:09:31.155 }, 00:09:31.155 { 00:09:31.155 "dma_device_id": "system", 00:09:31.155 "dma_device_type": 1 00:09:31.155 }, 00:09:31.155 { 00:09:31.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.155 "dma_device_type": 2 00:09:31.155 }, 00:09:31.155 { 00:09:31.155 "dma_device_id": "system", 00:09:31.155 "dma_device_type": 1 00:09:31.155 }, 00:09:31.155 { 00:09:31.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.155 "dma_device_type": 2 00:09:31.155 } 00:09:31.155 ], 00:09:31.155 "driver_specific": { 00:09:31.155 "raid": { 00:09:31.155 "uuid": "4e841430-0633-4668-9fc3-40b61c812f9d", 00:09:31.155 "strip_size_kb": 64, 00:09:31.155 "state": "online", 00:09:31.155 "raid_level": "concat", 00:09:31.155 "superblock": false, 00:09:31.155 "num_base_bdevs": 3, 00:09:31.155 "num_base_bdevs_discovered": 3, 00:09:31.155 "num_base_bdevs_operational": 3, 00:09:31.155 "base_bdevs_list": [ 00:09:31.155 { 00:09:31.155 "name": "NewBaseBdev", 00:09:31.155 "uuid": "84438ae7-c81a-4470-80ce-fe1d0de570ae", 00:09:31.155 "is_configured": true, 00:09:31.155 "data_offset": 0, 00:09:31.155 "data_size": 65536 00:09:31.155 }, 00:09:31.155 { 00:09:31.155 "name": "BaseBdev2", 00:09:31.155 "uuid": "41d71936-52a2-4788-bb3b-071911fe86f3", 00:09:31.155 "is_configured": true, 00:09:31.155 "data_offset": 0, 00:09:31.155 "data_size": 65536 00:09:31.155 }, 00:09:31.155 { 
00:09:31.155 "name": "BaseBdev3", 00:09:31.155 "uuid": "7144ea9e-9f42-46a8-880c-3dd408355194", 00:09:31.155 "is_configured": true, 00:09:31.155 "data_offset": 0, 00:09:31.155 "data_size": 65536 00:09:31.155 } 00:09:31.155 ] 00:09:31.155 } 00:09:31.155 } 00:09:31.155 }' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:31.155 BaseBdev2 00:09:31.155 BaseBdev3' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.155 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:31.414 [2024-10-13 02:23:49.867739] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.414 [2024-10-13 02:23:49.867857] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.414 [2024-10-13 02:23:49.867984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.414 [2024-10-13 02:23:49.868055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.414 [2024-10-13 02:23:49.868154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76627 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76627 ']' 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76627 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76627 00:09:31.414 killing process with pid 76627 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76627' 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76627 00:09:31.414 [2024-10-13 02:23:49.917490] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.414 02:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76627 00:09:31.414 [2024-10-13 02:23:49.947955] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.672 ************************************ 00:09:31.672 END TEST raid_state_function_test 00:09:31.672 ************************************ 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:31.672 00:09:31.672 real 0m8.812s 00:09:31.672 user 0m14.980s 00:09:31.672 sys 0m1.843s 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.672 02:23:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:31.672 02:23:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:31.672 02:23:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.672 02:23:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.672 ************************************ 00:09:31.672 START TEST raid_state_function_test_sb 00:09:31.672 ************************************ 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77232 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:31.672 Process raid pid: 77232 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77232' 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77232 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77232 ']' 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.672 02:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.930 [2024-10-13 02:23:50.366358] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:31.931 [2024-10-13 02:23:50.366636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.931 [2024-10-13 02:23:50.509366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.931 [2024-10-13 02:23:50.560711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.931 [2024-10-13 02:23:50.602538] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.931 [2024-10-13 02:23:50.602570] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.865 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.865 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:32.865 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.865 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.865 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.865 [2024-10-13 02:23:51.199570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.865 [2024-10-13 02:23:51.199728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.865 [2024-10-13 
02:23:51.199761] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.865 [2024-10-13 02:23:51.199784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.866 [2024-10-13 02:23:51.199802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.866 [2024-10-13 02:23:51.199825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.866 "name": "Existed_Raid", 00:09:32.866 "uuid": "3a2a4575-17b3-47ec-8b2d-299e76d96538", 00:09:32.866 "strip_size_kb": 64, 00:09:32.866 "state": "configuring", 00:09:32.866 "raid_level": "concat", 00:09:32.866 "superblock": true, 00:09:32.866 "num_base_bdevs": 3, 00:09:32.866 "num_base_bdevs_discovered": 0, 00:09:32.866 "num_base_bdevs_operational": 3, 00:09:32.866 "base_bdevs_list": [ 00:09:32.866 { 00:09:32.866 "name": "BaseBdev1", 00:09:32.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.866 "is_configured": false, 00:09:32.866 "data_offset": 0, 00:09:32.866 "data_size": 0 00:09:32.866 }, 00:09:32.866 { 00:09:32.866 "name": "BaseBdev2", 00:09:32.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.866 "is_configured": false, 00:09:32.866 "data_offset": 0, 00:09:32.866 "data_size": 0 00:09:32.866 }, 00:09:32.866 { 00:09:32.866 "name": "BaseBdev3", 00:09:32.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.866 "is_configured": false, 00:09:32.866 "data_offset": 0, 00:09:32.866 "data_size": 0 00:09:32.866 } 00:09:32.866 ] 00:09:32.866 }' 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.866 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.125 [2024-10-13 02:23:51.634744] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.125 [2024-10-13 02:23:51.634898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.125 [2024-10-13 02:23:51.646716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.125 [2024-10-13 02:23:51.646812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.125 [2024-10-13 02:23:51.646851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.125 [2024-10-13 02:23:51.646874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.125 [2024-10-13 02:23:51.646906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.125 [2024-10-13 02:23:51.646928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.125 
02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.125 [2024-10-13 02:23:51.667664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.125 BaseBdev1 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.125 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.125 [ 00:09:33.125 { 
00:09:33.125 "name": "BaseBdev1", 00:09:33.125 "aliases": [ 00:09:33.125 "027d7f3d-ce0e-4dfd-9ab8-fb95c5ad4e02" 00:09:33.125 ], 00:09:33.125 "product_name": "Malloc disk", 00:09:33.125 "block_size": 512, 00:09:33.125 "num_blocks": 65536, 00:09:33.125 "uuid": "027d7f3d-ce0e-4dfd-9ab8-fb95c5ad4e02", 00:09:33.126 "assigned_rate_limits": { 00:09:33.126 "rw_ios_per_sec": 0, 00:09:33.126 "rw_mbytes_per_sec": 0, 00:09:33.126 "r_mbytes_per_sec": 0, 00:09:33.126 "w_mbytes_per_sec": 0 00:09:33.126 }, 00:09:33.126 "claimed": true, 00:09:33.126 "claim_type": "exclusive_write", 00:09:33.126 "zoned": false, 00:09:33.126 "supported_io_types": { 00:09:33.126 "read": true, 00:09:33.126 "write": true, 00:09:33.126 "unmap": true, 00:09:33.126 "flush": true, 00:09:33.126 "reset": true, 00:09:33.126 "nvme_admin": false, 00:09:33.126 "nvme_io": false, 00:09:33.126 "nvme_io_md": false, 00:09:33.126 "write_zeroes": true, 00:09:33.126 "zcopy": true, 00:09:33.126 "get_zone_info": false, 00:09:33.126 "zone_management": false, 00:09:33.126 "zone_append": false, 00:09:33.126 "compare": false, 00:09:33.126 "compare_and_write": false, 00:09:33.126 "abort": true, 00:09:33.126 "seek_hole": false, 00:09:33.126 "seek_data": false, 00:09:33.126 "copy": true, 00:09:33.126 "nvme_iov_md": false 00:09:33.126 }, 00:09:33.126 "memory_domains": [ 00:09:33.126 { 00:09:33.126 "dma_device_id": "system", 00:09:33.126 "dma_device_type": 1 00:09:33.126 }, 00:09:33.126 { 00:09:33.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.126 "dma_device_type": 2 00:09:33.126 } 00:09:33.126 ], 00:09:33.126 "driver_specific": {} 00:09:33.126 } 00:09:33.126 ] 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.126 "name": "Existed_Raid", 00:09:33.126 "uuid": "41b3e469-e063-4746-8ed9-00edd026fc93", 00:09:33.126 "strip_size_kb": 64, 00:09:33.126 "state": "configuring", 00:09:33.126 "raid_level": "concat", 00:09:33.126 "superblock": true, 00:09:33.126 
"num_base_bdevs": 3, 00:09:33.126 "num_base_bdevs_discovered": 1, 00:09:33.126 "num_base_bdevs_operational": 3, 00:09:33.126 "base_bdevs_list": [ 00:09:33.126 { 00:09:33.126 "name": "BaseBdev1", 00:09:33.126 "uuid": "027d7f3d-ce0e-4dfd-9ab8-fb95c5ad4e02", 00:09:33.126 "is_configured": true, 00:09:33.126 "data_offset": 2048, 00:09:33.126 "data_size": 63488 00:09:33.126 }, 00:09:33.126 { 00:09:33.126 "name": "BaseBdev2", 00:09:33.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.126 "is_configured": false, 00:09:33.126 "data_offset": 0, 00:09:33.126 "data_size": 0 00:09:33.126 }, 00:09:33.126 { 00:09:33.126 "name": "BaseBdev3", 00:09:33.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.126 "is_configured": false, 00:09:33.126 "data_offset": 0, 00:09:33.126 "data_size": 0 00:09:33.126 } 00:09:33.126 ] 00:09:33.126 }' 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.126 02:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.693 [2024-10-13 02:23:52.166984] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.693 [2024-10-13 02:23:52.167158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.693 
02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.693 [2024-10-13 02:23:52.179020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.693 [2024-10-13 02:23:52.180967] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.693 [2024-10-13 02:23:52.181048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.693 [2024-10-13 02:23:52.181076] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.693 [2024-10-13 02:23:52.181099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.693 "name": "Existed_Raid", 00:09:33.693 "uuid": "97e4098c-6072-47fe-b99b-13cf94b04833", 00:09:33.693 "strip_size_kb": 64, 00:09:33.693 "state": "configuring", 00:09:33.693 "raid_level": "concat", 00:09:33.693 "superblock": true, 00:09:33.693 "num_base_bdevs": 3, 00:09:33.693 "num_base_bdevs_discovered": 1, 00:09:33.693 "num_base_bdevs_operational": 3, 00:09:33.693 "base_bdevs_list": [ 00:09:33.693 { 00:09:33.693 "name": "BaseBdev1", 00:09:33.693 "uuid": "027d7f3d-ce0e-4dfd-9ab8-fb95c5ad4e02", 00:09:33.693 "is_configured": true, 00:09:33.693 "data_offset": 2048, 00:09:33.693 "data_size": 63488 00:09:33.693 }, 00:09:33.693 { 00:09:33.693 "name": "BaseBdev2", 00:09:33.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.693 "is_configured": false, 00:09:33.693 "data_offset": 0, 00:09:33.693 "data_size": 0 00:09:33.693 }, 00:09:33.693 { 00:09:33.693 "name": "BaseBdev3", 00:09:33.693 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:33.693 "is_configured": false, 00:09:33.693 "data_offset": 0, 00:09:33.693 "data_size": 0 00:09:33.693 } 00:09:33.693 ] 00:09:33.693 }' 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.693 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.951 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.951 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.951 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.209 [2024-10-13 02:23:52.645338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.209 BaseBdev2 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.209 [ 00:09:34.209 { 00:09:34.209 "name": "BaseBdev2", 00:09:34.209 "aliases": [ 00:09:34.209 "76e1434e-a16e-4fe6-af0c-7b99c9cb2502" 00:09:34.209 ], 00:09:34.209 "product_name": "Malloc disk", 00:09:34.209 "block_size": 512, 00:09:34.209 "num_blocks": 65536, 00:09:34.209 "uuid": "76e1434e-a16e-4fe6-af0c-7b99c9cb2502", 00:09:34.209 "assigned_rate_limits": { 00:09:34.209 "rw_ios_per_sec": 0, 00:09:34.209 "rw_mbytes_per_sec": 0, 00:09:34.209 "r_mbytes_per_sec": 0, 00:09:34.209 "w_mbytes_per_sec": 0 00:09:34.209 }, 00:09:34.209 "claimed": true, 00:09:34.209 "claim_type": "exclusive_write", 00:09:34.209 "zoned": false, 00:09:34.209 "supported_io_types": { 00:09:34.209 "read": true, 00:09:34.209 "write": true, 00:09:34.209 "unmap": true, 00:09:34.209 "flush": true, 00:09:34.209 "reset": true, 00:09:34.209 "nvme_admin": false, 00:09:34.209 "nvme_io": false, 00:09:34.209 "nvme_io_md": false, 00:09:34.209 "write_zeroes": true, 00:09:34.209 "zcopy": true, 00:09:34.209 "get_zone_info": false, 00:09:34.209 "zone_management": false, 00:09:34.209 "zone_append": false, 00:09:34.209 "compare": false, 00:09:34.209 "compare_and_write": false, 00:09:34.209 "abort": true, 00:09:34.209 "seek_hole": false, 00:09:34.209 "seek_data": false, 00:09:34.209 "copy": true, 00:09:34.209 "nvme_iov_md": false 00:09:34.209 }, 00:09:34.209 "memory_domains": [ 00:09:34.209 { 00:09:34.209 "dma_device_id": "system", 00:09:34.209 "dma_device_type": 1 00:09:34.209 }, 00:09:34.209 { 00:09:34.209 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.209 "dma_device_type": 2 00:09:34.209 } 00:09:34.209 ], 00:09:34.209 "driver_specific": {} 00:09:34.209 } 00:09:34.209 ] 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.209 02:23:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.210 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.210 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.210 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.210 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.210 "name": "Existed_Raid", 00:09:34.210 "uuid": "97e4098c-6072-47fe-b99b-13cf94b04833", 00:09:34.210 "strip_size_kb": 64, 00:09:34.210 "state": "configuring", 00:09:34.210 "raid_level": "concat", 00:09:34.210 "superblock": true, 00:09:34.210 "num_base_bdevs": 3, 00:09:34.210 "num_base_bdevs_discovered": 2, 00:09:34.210 "num_base_bdevs_operational": 3, 00:09:34.210 "base_bdevs_list": [ 00:09:34.210 { 00:09:34.210 "name": "BaseBdev1", 00:09:34.210 "uuid": "027d7f3d-ce0e-4dfd-9ab8-fb95c5ad4e02", 00:09:34.210 "is_configured": true, 00:09:34.210 "data_offset": 2048, 00:09:34.210 "data_size": 63488 00:09:34.210 }, 00:09:34.210 { 00:09:34.210 "name": "BaseBdev2", 00:09:34.210 "uuid": "76e1434e-a16e-4fe6-af0c-7b99c9cb2502", 00:09:34.210 "is_configured": true, 00:09:34.210 "data_offset": 2048, 00:09:34.210 "data_size": 63488 00:09:34.210 }, 00:09:34.210 { 00:09:34.210 "name": "BaseBdev3", 00:09:34.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.210 "is_configured": false, 00:09:34.210 "data_offset": 0, 00:09:34.210 "data_size": 0 00:09:34.210 } 00:09:34.210 ] 00:09:34.210 }' 00:09:34.210 02:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.210 02:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.481 02:23:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.481 [2024-10-13 02:23:53.135520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.481 [2024-10-13 02:23:53.135718] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:34.481 [2024-10-13 02:23:53.135746] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.481 [2024-10-13 02:23:53.136021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:34.481 [2024-10-13 02:23:53.136142] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:34.481 [2024-10-13 02:23:53.136157] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:34.481 [2024-10-13 02:23:53.136272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.481 BaseBdev3 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.481 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.756 [ 00:09:34.756 { 00:09:34.756 "name": "BaseBdev3", 00:09:34.756 "aliases": [ 00:09:34.756 "dcb944e7-0706-4503-b33b-874ac27a867b" 00:09:34.756 ], 00:09:34.756 "product_name": "Malloc disk", 00:09:34.756 "block_size": 512, 00:09:34.756 "num_blocks": 65536, 00:09:34.756 "uuid": "dcb944e7-0706-4503-b33b-874ac27a867b", 00:09:34.756 "assigned_rate_limits": { 00:09:34.756 "rw_ios_per_sec": 0, 00:09:34.756 "rw_mbytes_per_sec": 0, 00:09:34.756 "r_mbytes_per_sec": 0, 00:09:34.756 "w_mbytes_per_sec": 0 00:09:34.756 }, 00:09:34.756 "claimed": true, 00:09:34.756 "claim_type": "exclusive_write", 00:09:34.756 "zoned": false, 00:09:34.756 "supported_io_types": { 00:09:34.756 "read": true, 00:09:34.756 "write": true, 00:09:34.756 "unmap": true, 00:09:34.756 "flush": true, 00:09:34.756 "reset": true, 00:09:34.756 "nvme_admin": false, 00:09:34.756 "nvme_io": false, 00:09:34.756 "nvme_io_md": false, 00:09:34.756 "write_zeroes": true, 00:09:34.756 "zcopy": true, 00:09:34.756 "get_zone_info": false, 00:09:34.756 "zone_management": false, 00:09:34.756 "zone_append": false, 00:09:34.756 "compare": false, 00:09:34.756 "compare_and_write": false, 00:09:34.756 "abort": true, 00:09:34.756 "seek_hole": false, 00:09:34.756 "seek_data": false, 
00:09:34.756 "copy": true, 00:09:34.756 "nvme_iov_md": false 00:09:34.756 }, 00:09:34.756 "memory_domains": [ 00:09:34.756 { 00:09:34.756 "dma_device_id": "system", 00:09:34.756 "dma_device_type": 1 00:09:34.756 }, 00:09:34.756 { 00:09:34.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.756 "dma_device_type": 2 00:09:34.756 } 00:09:34.756 ], 00:09:34.756 "driver_specific": {} 00:09:34.756 } 00:09:34.756 ] 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.756 "name": "Existed_Raid", 00:09:34.756 "uuid": "97e4098c-6072-47fe-b99b-13cf94b04833", 00:09:34.756 "strip_size_kb": 64, 00:09:34.756 "state": "online", 00:09:34.756 "raid_level": "concat", 00:09:34.756 "superblock": true, 00:09:34.756 "num_base_bdevs": 3, 00:09:34.756 "num_base_bdevs_discovered": 3, 00:09:34.756 "num_base_bdevs_operational": 3, 00:09:34.756 "base_bdevs_list": [ 00:09:34.756 { 00:09:34.756 "name": "BaseBdev1", 00:09:34.756 "uuid": "027d7f3d-ce0e-4dfd-9ab8-fb95c5ad4e02", 00:09:34.756 "is_configured": true, 00:09:34.756 "data_offset": 2048, 00:09:34.756 "data_size": 63488 00:09:34.756 }, 00:09:34.756 { 00:09:34.756 "name": "BaseBdev2", 00:09:34.756 "uuid": "76e1434e-a16e-4fe6-af0c-7b99c9cb2502", 00:09:34.756 "is_configured": true, 00:09:34.756 "data_offset": 2048, 00:09:34.756 "data_size": 63488 00:09:34.756 }, 00:09:34.756 { 00:09:34.756 "name": "BaseBdev3", 00:09:34.756 "uuid": "dcb944e7-0706-4503-b33b-874ac27a867b", 00:09:34.756 "is_configured": true, 00:09:34.756 "data_offset": 2048, 00:09:34.756 "data_size": 63488 00:09:34.756 } 00:09:34.756 ] 00:09:34.756 }' 00:09:34.756 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.756 02:23:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.014 [2024-10-13 02:23:53.623126] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.014 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.014 "name": "Existed_Raid", 00:09:35.014 "aliases": [ 00:09:35.014 "97e4098c-6072-47fe-b99b-13cf94b04833" 00:09:35.014 ], 00:09:35.014 "product_name": "Raid Volume", 00:09:35.014 "block_size": 512, 00:09:35.014 "num_blocks": 190464, 00:09:35.014 "uuid": "97e4098c-6072-47fe-b99b-13cf94b04833", 00:09:35.014 "assigned_rate_limits": { 00:09:35.014 "rw_ios_per_sec": 0, 00:09:35.014 "rw_mbytes_per_sec": 0, 00:09:35.014 
"r_mbytes_per_sec": 0, 00:09:35.014 "w_mbytes_per_sec": 0 00:09:35.014 }, 00:09:35.014 "claimed": false, 00:09:35.014 "zoned": false, 00:09:35.014 "supported_io_types": { 00:09:35.014 "read": true, 00:09:35.014 "write": true, 00:09:35.014 "unmap": true, 00:09:35.014 "flush": true, 00:09:35.014 "reset": true, 00:09:35.014 "nvme_admin": false, 00:09:35.014 "nvme_io": false, 00:09:35.014 "nvme_io_md": false, 00:09:35.014 "write_zeroes": true, 00:09:35.014 "zcopy": false, 00:09:35.014 "get_zone_info": false, 00:09:35.014 "zone_management": false, 00:09:35.014 "zone_append": false, 00:09:35.014 "compare": false, 00:09:35.014 "compare_and_write": false, 00:09:35.014 "abort": false, 00:09:35.014 "seek_hole": false, 00:09:35.014 "seek_data": false, 00:09:35.014 "copy": false, 00:09:35.014 "nvme_iov_md": false 00:09:35.014 }, 00:09:35.014 "memory_domains": [ 00:09:35.014 { 00:09:35.014 "dma_device_id": "system", 00:09:35.014 "dma_device_type": 1 00:09:35.014 }, 00:09:35.014 { 00:09:35.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.014 "dma_device_type": 2 00:09:35.014 }, 00:09:35.014 { 00:09:35.014 "dma_device_id": "system", 00:09:35.014 "dma_device_type": 1 00:09:35.014 }, 00:09:35.014 { 00:09:35.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.014 "dma_device_type": 2 00:09:35.014 }, 00:09:35.014 { 00:09:35.014 "dma_device_id": "system", 00:09:35.014 "dma_device_type": 1 00:09:35.014 }, 00:09:35.015 { 00:09:35.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.015 "dma_device_type": 2 00:09:35.015 } 00:09:35.015 ], 00:09:35.015 "driver_specific": { 00:09:35.015 "raid": { 00:09:35.015 "uuid": "97e4098c-6072-47fe-b99b-13cf94b04833", 00:09:35.015 "strip_size_kb": 64, 00:09:35.015 "state": "online", 00:09:35.015 "raid_level": "concat", 00:09:35.015 "superblock": true, 00:09:35.015 "num_base_bdevs": 3, 00:09:35.015 "num_base_bdevs_discovered": 3, 00:09:35.015 "num_base_bdevs_operational": 3, 00:09:35.015 "base_bdevs_list": [ 00:09:35.015 { 00:09:35.015 
"name": "BaseBdev1", 00:09:35.015 "uuid": "027d7f3d-ce0e-4dfd-9ab8-fb95c5ad4e02", 00:09:35.015 "is_configured": true, 00:09:35.015 "data_offset": 2048, 00:09:35.015 "data_size": 63488 00:09:35.015 }, 00:09:35.015 { 00:09:35.015 "name": "BaseBdev2", 00:09:35.015 "uuid": "76e1434e-a16e-4fe6-af0c-7b99c9cb2502", 00:09:35.015 "is_configured": true, 00:09:35.015 "data_offset": 2048, 00:09:35.015 "data_size": 63488 00:09:35.015 }, 00:09:35.015 { 00:09:35.015 "name": "BaseBdev3", 00:09:35.015 "uuid": "dcb944e7-0706-4503-b33b-874ac27a867b", 00:09:35.015 "is_configured": true, 00:09:35.015 "data_offset": 2048, 00:09:35.015 "data_size": 63488 00:09:35.015 } 00:09:35.015 ] 00:09:35.015 } 00:09:35.015 } 00:09:35.015 }' 00:09:35.015 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.273 BaseBdev2 00:09:35.273 BaseBdev3' 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.273 02:23:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.273 [2024-10-13 02:23:53.914289] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.273 [2024-10-13 02:23:53.914321] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.273 [2024-10-13 02:23:53.914378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.273 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.274 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.274 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.274 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.532 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.532 "name": "Existed_Raid", 00:09:35.532 "uuid": "97e4098c-6072-47fe-b99b-13cf94b04833", 00:09:35.532 "strip_size_kb": 64, 00:09:35.532 "state": "offline", 00:09:35.532 "raid_level": "concat", 00:09:35.532 "superblock": true, 00:09:35.532 "num_base_bdevs": 3, 00:09:35.532 "num_base_bdevs_discovered": 2, 00:09:35.532 "num_base_bdevs_operational": 2, 00:09:35.532 "base_bdevs_list": [ 00:09:35.532 { 00:09:35.532 "name": null, 00:09:35.532 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:35.532 "is_configured": false, 00:09:35.532 "data_offset": 0, 00:09:35.532 "data_size": 63488 00:09:35.532 }, 00:09:35.532 { 00:09:35.532 "name": "BaseBdev2", 00:09:35.532 "uuid": "76e1434e-a16e-4fe6-af0c-7b99c9cb2502", 00:09:35.532 "is_configured": true, 00:09:35.532 "data_offset": 2048, 00:09:35.532 "data_size": 63488 00:09:35.532 }, 00:09:35.532 { 00:09:35.532 "name": "BaseBdev3", 00:09:35.532 "uuid": "dcb944e7-0706-4503-b33b-874ac27a867b", 00:09:35.532 "is_configured": true, 00:09:35.532 "data_offset": 2048, 00:09:35.532 "data_size": 63488 00:09:35.532 } 00:09:35.532 ] 00:09:35.532 }' 00:09:35.532 02:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.532 02:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.789 [2024-10-13 02:23:54.404736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.789 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 [2024-10-13 02:23:54.471732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.048 [2024-10-13 02:23:54.471797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 BaseBdev2 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.048 
02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 [ 00:09:36.048 { 00:09:36.048 "name": "BaseBdev2", 00:09:36.048 "aliases": [ 00:09:36.048 "f0acb363-a1bf-4692-b84b-48b6964bc0dd" 00:09:36.048 ], 00:09:36.048 "product_name": "Malloc disk", 00:09:36.048 "block_size": 512, 00:09:36.048 "num_blocks": 65536, 00:09:36.048 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:36.048 "assigned_rate_limits": { 00:09:36.048 "rw_ios_per_sec": 0, 00:09:36.048 "rw_mbytes_per_sec": 0, 00:09:36.048 "r_mbytes_per_sec": 0, 00:09:36.048 "w_mbytes_per_sec": 0 
00:09:36.048 }, 00:09:36.048 "claimed": false, 00:09:36.048 "zoned": false, 00:09:36.048 "supported_io_types": { 00:09:36.048 "read": true, 00:09:36.048 "write": true, 00:09:36.048 "unmap": true, 00:09:36.048 "flush": true, 00:09:36.048 "reset": true, 00:09:36.048 "nvme_admin": false, 00:09:36.048 "nvme_io": false, 00:09:36.048 "nvme_io_md": false, 00:09:36.048 "write_zeroes": true, 00:09:36.048 "zcopy": true, 00:09:36.048 "get_zone_info": false, 00:09:36.048 "zone_management": false, 00:09:36.048 "zone_append": false, 00:09:36.048 "compare": false, 00:09:36.048 "compare_and_write": false, 00:09:36.048 "abort": true, 00:09:36.048 "seek_hole": false, 00:09:36.048 "seek_data": false, 00:09:36.048 "copy": true, 00:09:36.048 "nvme_iov_md": false 00:09:36.048 }, 00:09:36.048 "memory_domains": [ 00:09:36.048 { 00:09:36.048 "dma_device_id": "system", 00:09:36.048 "dma_device_type": 1 00:09:36.048 }, 00:09:36.048 { 00:09:36.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.048 "dma_device_type": 2 00:09:36.048 } 00:09:36.048 ], 00:09:36.048 "driver_specific": {} 00:09:36.048 } 00:09:36.048 ] 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 BaseBdev3 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.048 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 [ 00:09:36.048 { 00:09:36.049 "name": "BaseBdev3", 00:09:36.049 "aliases": [ 00:09:36.049 "9eba9091-c87d-486a-b24c-4e1f42931afe" 00:09:36.049 ], 00:09:36.049 "product_name": "Malloc disk", 00:09:36.049 "block_size": 512, 00:09:36.049 "num_blocks": 65536, 00:09:36.049 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:36.049 "assigned_rate_limits": { 00:09:36.049 "rw_ios_per_sec": 0, 00:09:36.049 "rw_mbytes_per_sec": 0, 
00:09:36.049 "r_mbytes_per_sec": 0, 00:09:36.049 "w_mbytes_per_sec": 0 00:09:36.049 }, 00:09:36.049 "claimed": false, 00:09:36.049 "zoned": false, 00:09:36.049 "supported_io_types": { 00:09:36.049 "read": true, 00:09:36.049 "write": true, 00:09:36.049 "unmap": true, 00:09:36.049 "flush": true, 00:09:36.049 "reset": true, 00:09:36.049 "nvme_admin": false, 00:09:36.049 "nvme_io": false, 00:09:36.049 "nvme_io_md": false, 00:09:36.049 "write_zeroes": true, 00:09:36.049 "zcopy": true, 00:09:36.049 "get_zone_info": false, 00:09:36.049 "zone_management": false, 00:09:36.049 "zone_append": false, 00:09:36.049 "compare": false, 00:09:36.049 "compare_and_write": false, 00:09:36.049 "abort": true, 00:09:36.049 "seek_hole": false, 00:09:36.049 "seek_data": false, 00:09:36.049 "copy": true, 00:09:36.049 "nvme_iov_md": false 00:09:36.049 }, 00:09:36.049 "memory_domains": [ 00:09:36.049 { 00:09:36.049 "dma_device_id": "system", 00:09:36.049 "dma_device_type": 1 00:09:36.049 }, 00:09:36.049 { 00:09:36.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.049 "dma_device_type": 2 00:09:36.049 } 00:09:36.049 ], 00:09:36.049 "driver_specific": {} 00:09:36.049 } 00:09:36.049 ] 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.049 [2024-10-13 02:23:54.629896] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.049 [2024-10-13 02:23:54.629947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.049 [2024-10-13 02:23:54.629967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.049 [2024-10-13 02:23:54.631747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.049 02:23:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.049 "name": "Existed_Raid", 00:09:36.049 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:36.049 "strip_size_kb": 64, 00:09:36.049 "state": "configuring", 00:09:36.049 "raid_level": "concat", 00:09:36.049 "superblock": true, 00:09:36.049 "num_base_bdevs": 3, 00:09:36.049 "num_base_bdevs_discovered": 2, 00:09:36.049 "num_base_bdevs_operational": 3, 00:09:36.049 "base_bdevs_list": [ 00:09:36.049 { 00:09:36.049 "name": "BaseBdev1", 00:09:36.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.049 "is_configured": false, 00:09:36.049 "data_offset": 0, 00:09:36.049 "data_size": 0 00:09:36.049 }, 00:09:36.049 { 00:09:36.049 "name": "BaseBdev2", 00:09:36.049 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:36.049 "is_configured": true, 00:09:36.049 "data_offset": 2048, 00:09:36.049 "data_size": 63488 00:09:36.049 }, 00:09:36.049 { 00:09:36.049 "name": "BaseBdev3", 00:09:36.049 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:36.049 "is_configured": true, 00:09:36.049 "data_offset": 2048, 00:09:36.049 "data_size": 63488 00:09:36.049 } 00:09:36.049 ] 00:09:36.049 }' 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.049 02:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.616 [2024-10-13 02:23:55.089126] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.616 "name": "Existed_Raid", 00:09:36.616 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:36.616 "strip_size_kb": 64, 00:09:36.616 "state": "configuring", 00:09:36.616 "raid_level": "concat", 00:09:36.616 "superblock": true, 00:09:36.616 "num_base_bdevs": 3, 00:09:36.616 "num_base_bdevs_discovered": 1, 00:09:36.616 "num_base_bdevs_operational": 3, 00:09:36.616 "base_bdevs_list": [ 00:09:36.616 { 00:09:36.616 "name": "BaseBdev1", 00:09:36.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.616 "is_configured": false, 00:09:36.616 "data_offset": 0, 00:09:36.616 "data_size": 0 00:09:36.616 }, 00:09:36.616 { 00:09:36.616 "name": null, 00:09:36.616 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:36.616 "is_configured": false, 00:09:36.616 "data_offset": 0, 00:09:36.616 "data_size": 63488 00:09:36.616 }, 00:09:36.616 { 00:09:36.616 "name": "BaseBdev3", 00:09:36.616 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:36.616 "is_configured": true, 00:09:36.616 "data_offset": 2048, 00:09:36.616 "data_size": 63488 00:09:36.616 } 00:09:36.616 ] 00:09:36.616 }' 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.616 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.874 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.874 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.874 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:09:36.874 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.874 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.132 [2024-10-13 02:23:55.591190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.132 BaseBdev1 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.132 02:23:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.132 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.133 [ 00:09:37.133 { 00:09:37.133 "name": "BaseBdev1", 00:09:37.133 "aliases": [ 00:09:37.133 "20509aa0-a503-485a-9ec3-df67a9260598" 00:09:37.133 ], 00:09:37.133 "product_name": "Malloc disk", 00:09:37.133 "block_size": 512, 00:09:37.133 "num_blocks": 65536, 00:09:37.133 "uuid": "20509aa0-a503-485a-9ec3-df67a9260598", 00:09:37.133 "assigned_rate_limits": { 00:09:37.133 "rw_ios_per_sec": 0, 00:09:37.133 "rw_mbytes_per_sec": 0, 00:09:37.133 "r_mbytes_per_sec": 0, 00:09:37.133 "w_mbytes_per_sec": 0 00:09:37.133 }, 00:09:37.133 "claimed": true, 00:09:37.133 "claim_type": "exclusive_write", 00:09:37.133 "zoned": false, 00:09:37.133 "supported_io_types": { 00:09:37.133 "read": true, 00:09:37.133 "write": true, 00:09:37.133 "unmap": true, 00:09:37.133 "flush": true, 00:09:37.133 "reset": true, 00:09:37.133 "nvme_admin": false, 00:09:37.133 "nvme_io": false, 00:09:37.133 "nvme_io_md": false, 00:09:37.133 "write_zeroes": true, 00:09:37.133 "zcopy": true, 00:09:37.133 "get_zone_info": false, 00:09:37.133 "zone_management": false, 00:09:37.133 "zone_append": false, 00:09:37.133 "compare": false, 00:09:37.133 "compare_and_write": false, 00:09:37.133 "abort": true, 00:09:37.133 "seek_hole": false, 00:09:37.133 "seek_data": false, 00:09:37.133 "copy": true, 00:09:37.133 "nvme_iov_md": false 00:09:37.133 }, 00:09:37.133 "memory_domains": [ 00:09:37.133 { 00:09:37.133 "dma_device_id": "system", 00:09:37.133 "dma_device_type": 1 00:09:37.133 }, 00:09:37.133 { 00:09:37.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.133 
"dma_device_type": 2 00:09:37.133 } 00:09:37.133 ], 00:09:37.133 "driver_specific": {} 00:09:37.133 } 00:09:37.133 ] 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.133 "name": "Existed_Raid", 00:09:37.133 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:37.133 "strip_size_kb": 64, 00:09:37.133 "state": "configuring", 00:09:37.133 "raid_level": "concat", 00:09:37.133 "superblock": true, 00:09:37.133 "num_base_bdevs": 3, 00:09:37.133 "num_base_bdevs_discovered": 2, 00:09:37.133 "num_base_bdevs_operational": 3, 00:09:37.133 "base_bdevs_list": [ 00:09:37.133 { 00:09:37.133 "name": "BaseBdev1", 00:09:37.133 "uuid": "20509aa0-a503-485a-9ec3-df67a9260598", 00:09:37.133 "is_configured": true, 00:09:37.133 "data_offset": 2048, 00:09:37.133 "data_size": 63488 00:09:37.133 }, 00:09:37.133 { 00:09:37.133 "name": null, 00:09:37.133 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:37.133 "is_configured": false, 00:09:37.133 "data_offset": 0, 00:09:37.133 "data_size": 63488 00:09:37.133 }, 00:09:37.133 { 00:09:37.133 "name": "BaseBdev3", 00:09:37.133 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:37.133 "is_configured": true, 00:09:37.133 "data_offset": 2048, 00:09:37.133 "data_size": 63488 00:09:37.133 } 00:09:37.133 ] 00:09:37.133 }' 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.133 02:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.390 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.390 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.390 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.648 [2024-10-13 02:23:56.122437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.648 "name": "Existed_Raid", 00:09:37.648 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:37.648 "strip_size_kb": 64, 00:09:37.648 "state": "configuring", 00:09:37.648 "raid_level": "concat", 00:09:37.648 "superblock": true, 00:09:37.648 "num_base_bdevs": 3, 00:09:37.648 "num_base_bdevs_discovered": 1, 00:09:37.648 "num_base_bdevs_operational": 3, 00:09:37.648 "base_bdevs_list": [ 00:09:37.648 { 00:09:37.648 "name": "BaseBdev1", 00:09:37.648 "uuid": "20509aa0-a503-485a-9ec3-df67a9260598", 00:09:37.648 "is_configured": true, 00:09:37.648 "data_offset": 2048, 00:09:37.648 "data_size": 63488 00:09:37.648 }, 00:09:37.648 { 00:09:37.648 "name": null, 00:09:37.648 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:37.648 "is_configured": false, 00:09:37.648 "data_offset": 0, 00:09:37.648 "data_size": 63488 00:09:37.648 }, 00:09:37.648 { 00:09:37.648 "name": null, 00:09:37.648 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:37.648 "is_configured": false, 00:09:37.648 "data_offset": 0, 00:09:37.648 "data_size": 63488 00:09:37.648 } 00:09:37.648 ] 00:09:37.648 }' 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.648 02:23:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.906 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.906 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.906 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.906 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.906 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.166 [2024-10-13 02:23:56.597675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.166 02:23:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.166 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.166 "name": "Existed_Raid", 00:09:38.166 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:38.166 "strip_size_kb": 64, 00:09:38.166 "state": "configuring", 00:09:38.166 "raid_level": "concat", 00:09:38.166 "superblock": true, 00:09:38.166 "num_base_bdevs": 3, 00:09:38.166 "num_base_bdevs_discovered": 2, 00:09:38.166 "num_base_bdevs_operational": 3, 00:09:38.166 "base_bdevs_list": [ 00:09:38.166 { 00:09:38.166 "name": "BaseBdev1", 00:09:38.166 "uuid": "20509aa0-a503-485a-9ec3-df67a9260598", 00:09:38.166 "is_configured": true, 00:09:38.166 "data_offset": 2048, 00:09:38.166 "data_size": 63488 00:09:38.166 }, 00:09:38.166 { 00:09:38.166 "name": null, 00:09:38.166 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:38.166 "is_configured": 
false, 00:09:38.166 "data_offset": 0, 00:09:38.166 "data_size": 63488 00:09:38.167 }, 00:09:38.167 { 00:09:38.167 "name": "BaseBdev3", 00:09:38.167 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:38.167 "is_configured": true, 00:09:38.167 "data_offset": 2048, 00:09:38.167 "data_size": 63488 00:09:38.167 } 00:09:38.167 ] 00:09:38.167 }' 00:09:38.167 02:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.167 02:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.426 [2024-10-13 02:23:57.060956] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.426 02:23:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.426 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.427 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.427 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.427 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.427 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.427 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.427 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.427 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.427 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.686 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.686 "name": "Existed_Raid", 00:09:38.686 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:38.686 "strip_size_kb": 64, 00:09:38.686 "state": "configuring", 00:09:38.686 "raid_level": "concat", 00:09:38.686 "superblock": true, 00:09:38.686 "num_base_bdevs": 3, 00:09:38.686 
"num_base_bdevs_discovered": 1, 00:09:38.686 "num_base_bdevs_operational": 3, 00:09:38.686 "base_bdevs_list": [ 00:09:38.686 { 00:09:38.686 "name": null, 00:09:38.686 "uuid": "20509aa0-a503-485a-9ec3-df67a9260598", 00:09:38.686 "is_configured": false, 00:09:38.686 "data_offset": 0, 00:09:38.686 "data_size": 63488 00:09:38.686 }, 00:09:38.686 { 00:09:38.686 "name": null, 00:09:38.686 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:38.686 "is_configured": false, 00:09:38.686 "data_offset": 0, 00:09:38.686 "data_size": 63488 00:09:38.686 }, 00:09:38.686 { 00:09:38.686 "name": "BaseBdev3", 00:09:38.686 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:38.686 "is_configured": true, 00:09:38.686 "data_offset": 2048, 00:09:38.686 "data_size": 63488 00:09:38.686 } 00:09:38.686 ] 00:09:38.686 }' 00:09:38.686 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.686 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.946 02:23:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.946 [2024-10-13 02:23:57.578783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.946 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.946 
02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.205 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.205 "name": "Existed_Raid", 00:09:39.205 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:39.205 "strip_size_kb": 64, 00:09:39.205 "state": "configuring", 00:09:39.205 "raid_level": "concat", 00:09:39.205 "superblock": true, 00:09:39.205 "num_base_bdevs": 3, 00:09:39.205 "num_base_bdevs_discovered": 2, 00:09:39.205 "num_base_bdevs_operational": 3, 00:09:39.205 "base_bdevs_list": [ 00:09:39.205 { 00:09:39.205 "name": null, 00:09:39.206 "uuid": "20509aa0-a503-485a-9ec3-df67a9260598", 00:09:39.206 "is_configured": false, 00:09:39.206 "data_offset": 0, 00:09:39.206 "data_size": 63488 00:09:39.206 }, 00:09:39.206 { 00:09:39.206 "name": "BaseBdev2", 00:09:39.206 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:39.206 "is_configured": true, 00:09:39.206 "data_offset": 2048, 00:09:39.206 "data_size": 63488 00:09:39.206 }, 00:09:39.206 { 00:09:39.206 "name": "BaseBdev3", 00:09:39.206 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:39.206 "is_configured": true, 00:09:39.206 "data_offset": 2048, 00:09:39.206 "data_size": 63488 00:09:39.206 } 00:09:39.206 ] 00:09:39.206 }' 00:09:39.206 02:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.206 02:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 20509aa0-a503-485a-9ec3-df67a9260598 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.465 [2024-10-13 02:23:58.124711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:39.465 [2024-10-13 02:23:58.124893] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:39.465 [2024-10-13 02:23:58.124910] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:39.465 [2024-10-13 02:23:58.125140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:39.465 [2024-10-13 02:23:58.125260] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:39.465 [2024-10-13 02:23:58.125269] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 
00:09:39.465 [2024-10-13 02:23:58.125372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.465 NewBaseBdev 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.465 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.728 [ 00:09:39.728 { 00:09:39.728 "name": "NewBaseBdev", 00:09:39.728 "aliases": [ 00:09:39.728 "20509aa0-a503-485a-9ec3-df67a9260598" 00:09:39.728 ], 00:09:39.728 "product_name": "Malloc disk", 00:09:39.728 "block_size": 512, 
00:09:39.728 "num_blocks": 65536, 00:09:39.728 "uuid": "20509aa0-a503-485a-9ec3-df67a9260598", 00:09:39.728 "assigned_rate_limits": { 00:09:39.728 "rw_ios_per_sec": 0, 00:09:39.728 "rw_mbytes_per_sec": 0, 00:09:39.728 "r_mbytes_per_sec": 0, 00:09:39.728 "w_mbytes_per_sec": 0 00:09:39.728 }, 00:09:39.728 "claimed": true, 00:09:39.728 "claim_type": "exclusive_write", 00:09:39.728 "zoned": false, 00:09:39.728 "supported_io_types": { 00:09:39.728 "read": true, 00:09:39.728 "write": true, 00:09:39.728 "unmap": true, 00:09:39.728 "flush": true, 00:09:39.728 "reset": true, 00:09:39.728 "nvme_admin": false, 00:09:39.728 "nvme_io": false, 00:09:39.728 "nvme_io_md": false, 00:09:39.728 "write_zeroes": true, 00:09:39.728 "zcopy": true, 00:09:39.728 "get_zone_info": false, 00:09:39.728 "zone_management": false, 00:09:39.728 "zone_append": false, 00:09:39.728 "compare": false, 00:09:39.728 "compare_and_write": false, 00:09:39.728 "abort": true, 00:09:39.728 "seek_hole": false, 00:09:39.728 "seek_data": false, 00:09:39.728 "copy": true, 00:09:39.728 "nvme_iov_md": false 00:09:39.728 }, 00:09:39.728 "memory_domains": [ 00:09:39.728 { 00:09:39.728 "dma_device_id": "system", 00:09:39.728 "dma_device_type": 1 00:09:39.728 }, 00:09:39.728 { 00:09:39.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.728 "dma_device_type": 2 00:09:39.728 } 00:09:39.728 ], 00:09:39.728 "driver_specific": {} 00:09:39.728 } 00:09:39.728 ] 00:09:39.728 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.729 "name": "Existed_Raid", 00:09:39.729 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:39.729 "strip_size_kb": 64, 00:09:39.729 "state": "online", 00:09:39.729 "raid_level": "concat", 00:09:39.729 "superblock": true, 00:09:39.729 "num_base_bdevs": 3, 00:09:39.729 "num_base_bdevs_discovered": 3, 00:09:39.729 "num_base_bdevs_operational": 3, 00:09:39.729 "base_bdevs_list": [ 00:09:39.729 { 00:09:39.729 "name": "NewBaseBdev", 00:09:39.729 "uuid": 
"20509aa0-a503-485a-9ec3-df67a9260598", 00:09:39.729 "is_configured": true, 00:09:39.729 "data_offset": 2048, 00:09:39.729 "data_size": 63488 00:09:39.729 }, 00:09:39.729 { 00:09:39.729 "name": "BaseBdev2", 00:09:39.729 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:39.729 "is_configured": true, 00:09:39.729 "data_offset": 2048, 00:09:39.729 "data_size": 63488 00:09:39.729 }, 00:09:39.729 { 00:09:39.729 "name": "BaseBdev3", 00:09:39.729 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:39.729 "is_configured": true, 00:09:39.729 "data_offset": 2048, 00:09:39.729 "data_size": 63488 00:09:39.729 } 00:09:39.729 ] 00:09:39.729 }' 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.729 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:39.989 [2024-10-13 02:23:58.596301] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.989 "name": "Existed_Raid", 00:09:39.989 "aliases": [ 00:09:39.989 "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b" 00:09:39.989 ], 00:09:39.989 "product_name": "Raid Volume", 00:09:39.989 "block_size": 512, 00:09:39.989 "num_blocks": 190464, 00:09:39.989 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:39.989 "assigned_rate_limits": { 00:09:39.989 "rw_ios_per_sec": 0, 00:09:39.989 "rw_mbytes_per_sec": 0, 00:09:39.989 "r_mbytes_per_sec": 0, 00:09:39.989 "w_mbytes_per_sec": 0 00:09:39.989 }, 00:09:39.989 "claimed": false, 00:09:39.989 "zoned": false, 00:09:39.989 "supported_io_types": { 00:09:39.989 "read": true, 00:09:39.989 "write": true, 00:09:39.989 "unmap": true, 00:09:39.989 "flush": true, 00:09:39.989 "reset": true, 00:09:39.989 "nvme_admin": false, 00:09:39.989 "nvme_io": false, 00:09:39.989 "nvme_io_md": false, 00:09:39.989 "write_zeroes": true, 00:09:39.989 "zcopy": false, 00:09:39.989 "get_zone_info": false, 00:09:39.989 "zone_management": false, 00:09:39.989 "zone_append": false, 00:09:39.989 "compare": false, 00:09:39.989 "compare_and_write": false, 00:09:39.989 "abort": false, 00:09:39.989 "seek_hole": false, 00:09:39.989 "seek_data": false, 00:09:39.989 "copy": false, 00:09:39.989 "nvme_iov_md": false 00:09:39.989 }, 00:09:39.989 "memory_domains": [ 00:09:39.989 { 00:09:39.989 "dma_device_id": "system", 00:09:39.989 "dma_device_type": 1 00:09:39.989 }, 00:09:39.989 { 00:09:39.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.989 "dma_device_type": 2 00:09:39.989 }, 00:09:39.989 { 00:09:39.989 "dma_device_id": "system", 00:09:39.989 "dma_device_type": 1 00:09:39.989 }, 00:09:39.989 { 00:09:39.989 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.989 "dma_device_type": 2 00:09:39.989 }, 00:09:39.989 { 00:09:39.989 "dma_device_id": "system", 00:09:39.989 "dma_device_type": 1 00:09:39.989 }, 00:09:39.989 { 00:09:39.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.989 "dma_device_type": 2 00:09:39.989 } 00:09:39.989 ], 00:09:39.989 "driver_specific": { 00:09:39.989 "raid": { 00:09:39.989 "uuid": "8a6ebad4-033a-4cbb-bb7b-1264c4d2eb4b", 00:09:39.989 "strip_size_kb": 64, 00:09:39.989 "state": "online", 00:09:39.989 "raid_level": "concat", 00:09:39.989 "superblock": true, 00:09:39.989 "num_base_bdevs": 3, 00:09:39.989 "num_base_bdevs_discovered": 3, 00:09:39.989 "num_base_bdevs_operational": 3, 00:09:39.989 "base_bdevs_list": [ 00:09:39.989 { 00:09:39.989 "name": "NewBaseBdev", 00:09:39.989 "uuid": "20509aa0-a503-485a-9ec3-df67a9260598", 00:09:39.989 "is_configured": true, 00:09:39.989 "data_offset": 2048, 00:09:39.989 "data_size": 63488 00:09:39.989 }, 00:09:39.989 { 00:09:39.989 "name": "BaseBdev2", 00:09:39.989 "uuid": "f0acb363-a1bf-4692-b84b-48b6964bc0dd", 00:09:39.989 "is_configured": true, 00:09:39.989 "data_offset": 2048, 00:09:39.989 "data_size": 63488 00:09:39.989 }, 00:09:39.989 { 00:09:39.989 "name": "BaseBdev3", 00:09:39.989 "uuid": "9eba9091-c87d-486a-b24c-4e1f42931afe", 00:09:39.989 "is_configured": true, 00:09:39.989 "data_offset": 2048, 00:09:39.989 "data_size": 63488 00:09:39.989 } 00:09:39.989 ] 00:09:39.989 } 00:09:39.989 } 00:09:39.989 }' 00:09:39.989 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:40.249 BaseBdev2 00:09:40.249 BaseBdev3' 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.249 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.250 [2024-10-13 02:23:58.895401] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.250 [2024-10-13 02:23:58.895442] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.250 [2024-10-13 02:23:58.895514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.250 [2024-10-13 02:23:58.895568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.250 [2024-10-13 02:23:58.895580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77232 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77232 ']' 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77232 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.250 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77232 00:09:40.510 killing process with pid 77232 00:09:40.510 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.510 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.510 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77232' 00:09:40.510 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77232 00:09:40.510 [2024-10-13 02:23:58.944561] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.510 02:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77232 00:09:40.510 [2024-10-13 02:23:58.974781] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.769 02:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:40.769 00:09:40.769 real 0m8.954s 00:09:40.769 user 0m15.229s 00:09:40.769 sys 0m1.846s 00:09:40.769 02:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:40.769 02:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.769 ************************************ 00:09:40.769 END TEST raid_state_function_test_sb 00:09:40.769 ************************************ 00:09:40.769 02:23:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:40.769 02:23:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:40.769 02:23:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.769 02:23:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.769 ************************************ 00:09:40.769 START TEST raid_superblock_test 00:09:40.769 ************************************ 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:40.769 02:23:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77841 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77841 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77841 ']' 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.769 02:23:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.769 [2024-10-13 02:23:59.390796] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:40.769 [2024-10-13 02:23:59.390955] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77841 ] 00:09:41.029 [2024-10-13 02:23:59.517586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.029 [2024-10-13 02:23:59.563164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.029 [2024-10-13 02:23:59.605626] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.029 [2024-10-13 02:23:59.605668] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:41.657 
02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 malloc1 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 [2024-10-13 02:24:00.247761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:41.657 [2024-10-13 02:24:00.247834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.657 [2024-10-13 02:24:00.247854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:41.657 [2024-10-13 02:24:00.247879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.657 [2024-10-13 02:24:00.249891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.657 [2024-10-13 02:24:00.249934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:41.657 pt1 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 malloc2 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 [2024-10-13 02:24:00.290807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.657 [2024-10-13 02:24:00.290892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.657 [2024-10-13 02:24:00.290913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:41.657 [2024-10-13 02:24:00.290928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.657 [2024-10-13 02:24:00.293540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.657 [2024-10-13 02:24:00.293585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.657 
pt2 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 malloc3 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.657 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 [2024-10-13 02:24:00.319138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:41.657 [2024-10-13 02:24:00.319197] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.657 [2024-10-13 02:24:00.319214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:41.657 [2024-10-13 02:24:00.319224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.657 [2024-10-13 02:24:00.321173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.657 [2024-10-13 02:24:00.321209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:41.917 pt3 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.917 [2024-10-13 02:24:00.331209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:41.917 [2024-10-13 02:24:00.332970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.917 [2024-10-13 02:24:00.333026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:41.917 [2024-10-13 02:24:00.333167] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:41.917 [2024-10-13 02:24:00.333179] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:41.917 [2024-10-13 02:24:00.333432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:09:41.917 [2024-10-13 02:24:00.333567] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:41.917 [2024-10-13 02:24:00.333587] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:41.917 [2024-10-13 02:24:00.333702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.917 02:24:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.917 "name": "raid_bdev1", 00:09:41.917 "uuid": "4bf50fd7-9cc6-42c8-9392-bfa564483587", 00:09:41.917 "strip_size_kb": 64, 00:09:41.917 "state": "online", 00:09:41.917 "raid_level": "concat", 00:09:41.917 "superblock": true, 00:09:41.917 "num_base_bdevs": 3, 00:09:41.917 "num_base_bdevs_discovered": 3, 00:09:41.917 "num_base_bdevs_operational": 3, 00:09:41.917 "base_bdevs_list": [ 00:09:41.917 { 00:09:41.917 "name": "pt1", 00:09:41.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.917 "is_configured": true, 00:09:41.917 "data_offset": 2048, 00:09:41.917 "data_size": 63488 00:09:41.917 }, 00:09:41.917 { 00:09:41.917 "name": "pt2", 00:09:41.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.917 "is_configured": true, 00:09:41.917 "data_offset": 2048, 00:09:41.917 "data_size": 63488 00:09:41.917 }, 00:09:41.917 { 00:09:41.917 "name": "pt3", 00:09:41.917 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.917 "is_configured": true, 00:09:41.917 "data_offset": 2048, 00:09:41.917 "data_size": 63488 00:09:41.917 } 00:09:41.917 ] 00:09:41.917 }' 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.917 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.177 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.178 [2024-10-13 02:24:00.758761] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.178 "name": "raid_bdev1", 00:09:42.178 "aliases": [ 00:09:42.178 "4bf50fd7-9cc6-42c8-9392-bfa564483587" 00:09:42.178 ], 00:09:42.178 "product_name": "Raid Volume", 00:09:42.178 "block_size": 512, 00:09:42.178 "num_blocks": 190464, 00:09:42.178 "uuid": "4bf50fd7-9cc6-42c8-9392-bfa564483587", 00:09:42.178 "assigned_rate_limits": { 00:09:42.178 "rw_ios_per_sec": 0, 00:09:42.178 "rw_mbytes_per_sec": 0, 00:09:42.178 "r_mbytes_per_sec": 0, 00:09:42.178 "w_mbytes_per_sec": 0 00:09:42.178 }, 00:09:42.178 "claimed": false, 00:09:42.178 "zoned": false, 00:09:42.178 "supported_io_types": { 00:09:42.178 "read": true, 00:09:42.178 "write": true, 00:09:42.178 "unmap": true, 00:09:42.178 "flush": true, 00:09:42.178 "reset": true, 00:09:42.178 "nvme_admin": false, 00:09:42.178 "nvme_io": false, 00:09:42.178 "nvme_io_md": false, 00:09:42.178 "write_zeroes": true, 00:09:42.178 "zcopy": false, 00:09:42.178 "get_zone_info": false, 00:09:42.178 "zone_management": false, 00:09:42.178 "zone_append": false, 00:09:42.178 "compare": 
false, 00:09:42.178 "compare_and_write": false, 00:09:42.178 "abort": false, 00:09:42.178 "seek_hole": false, 00:09:42.178 "seek_data": false, 00:09:42.178 "copy": false, 00:09:42.178 "nvme_iov_md": false 00:09:42.178 }, 00:09:42.178 "memory_domains": [ 00:09:42.178 { 00:09:42.178 "dma_device_id": "system", 00:09:42.178 "dma_device_type": 1 00:09:42.178 }, 00:09:42.178 { 00:09:42.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.178 "dma_device_type": 2 00:09:42.178 }, 00:09:42.178 { 00:09:42.178 "dma_device_id": "system", 00:09:42.178 "dma_device_type": 1 00:09:42.178 }, 00:09:42.178 { 00:09:42.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.178 "dma_device_type": 2 00:09:42.178 }, 00:09:42.178 { 00:09:42.178 "dma_device_id": "system", 00:09:42.178 "dma_device_type": 1 00:09:42.178 }, 00:09:42.178 { 00:09:42.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.178 "dma_device_type": 2 00:09:42.178 } 00:09:42.178 ], 00:09:42.178 "driver_specific": { 00:09:42.178 "raid": { 00:09:42.178 "uuid": "4bf50fd7-9cc6-42c8-9392-bfa564483587", 00:09:42.178 "strip_size_kb": 64, 00:09:42.178 "state": "online", 00:09:42.178 "raid_level": "concat", 00:09:42.178 "superblock": true, 00:09:42.178 "num_base_bdevs": 3, 00:09:42.178 "num_base_bdevs_discovered": 3, 00:09:42.178 "num_base_bdevs_operational": 3, 00:09:42.178 "base_bdevs_list": [ 00:09:42.178 { 00:09:42.178 "name": "pt1", 00:09:42.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.178 "is_configured": true, 00:09:42.178 "data_offset": 2048, 00:09:42.178 "data_size": 63488 00:09:42.178 }, 00:09:42.178 { 00:09:42.178 "name": "pt2", 00:09:42.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.178 "is_configured": true, 00:09:42.178 "data_offset": 2048, 00:09:42.178 "data_size": 63488 00:09:42.178 }, 00:09:42.178 { 00:09:42.178 "name": "pt3", 00:09:42.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.178 "is_configured": true, 00:09:42.178 "data_offset": 2048, 00:09:42.178 
"data_size": 63488 00:09:42.178 } 00:09:42.178 ] 00:09:42.178 } 00:09:42.178 } 00:09:42.178 }' 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:42.178 pt2 00:09:42.178 pt3' 00:09:42.178 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.438 02:24:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.438 02:24:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:42.438 [2024-10-13 02:24:01.030190] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.438 02:24:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4bf50fd7-9cc6-42c8-9392-bfa564483587 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4bf50fd7-9cc6-42c8-9392-bfa564483587 ']' 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.438 [2024-10-13 02:24:01.077854] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.438 [2024-10-13 02:24:01.077893] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.438 [2024-10-13 02:24:01.077977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.438 [2024-10-13 02:24:01.078037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.438 [2024-10-13 02:24:01.078060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.438 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:42.439 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.699 02:24:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.699 [2024-10-13 02:24:01.237674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:42.699 [2024-10-13 02:24:01.239549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:09:42.699 [2024-10-13 02:24:01.239598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:42.699 [2024-10-13 02:24:01.239659] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:42.699 [2024-10-13 02:24:01.239713] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:42.699 [2024-10-13 02:24:01.239734] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:42.699 [2024-10-13 02:24:01.239747] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.699 [2024-10-13 02:24:01.239766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:09:42.699 request: 00:09:42.699 { 00:09:42.699 "name": "raid_bdev1", 00:09:42.699 "raid_level": "concat", 00:09:42.699 "base_bdevs": [ 00:09:42.699 "malloc1", 00:09:42.699 "malloc2", 00:09:42.699 "malloc3" 00:09:42.699 ], 00:09:42.699 "strip_size_kb": 64, 00:09:42.699 "superblock": false, 00:09:42.699 "method": "bdev_raid_create", 00:09:42.699 "req_id": 1 00:09:42.699 } 00:09:42.699 Got JSON-RPC error response 00:09:42.699 response: 00:09:42.699 { 00:09:42.699 "code": -17, 00:09:42.699 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:42.699 } 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:42.699 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.700 [2024-10-13 02:24:01.297468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:42.700 [2024-10-13 02:24:01.297521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.700 [2024-10-13 02:24:01.297536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:42.700 [2024-10-13 02:24:01.297546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.700 [2024-10-13 02:24:01.299667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.700 [2024-10-13 02:24:01.299707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:42.700 [2024-10-13 02:24:01.299775] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:42.700 [2024-10-13 02:24:01.299806] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:42.700 pt1 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.700 "name": "raid_bdev1", 
00:09:42.700 "uuid": "4bf50fd7-9cc6-42c8-9392-bfa564483587", 00:09:42.700 "strip_size_kb": 64, 00:09:42.700 "state": "configuring", 00:09:42.700 "raid_level": "concat", 00:09:42.700 "superblock": true, 00:09:42.700 "num_base_bdevs": 3, 00:09:42.700 "num_base_bdevs_discovered": 1, 00:09:42.700 "num_base_bdevs_operational": 3, 00:09:42.700 "base_bdevs_list": [ 00:09:42.700 { 00:09:42.700 "name": "pt1", 00:09:42.700 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.700 "is_configured": true, 00:09:42.700 "data_offset": 2048, 00:09:42.700 "data_size": 63488 00:09:42.700 }, 00:09:42.700 { 00:09:42.700 "name": null, 00:09:42.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.700 "is_configured": false, 00:09:42.700 "data_offset": 2048, 00:09:42.700 "data_size": 63488 00:09:42.700 }, 00:09:42.700 { 00:09:42.700 "name": null, 00:09:42.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.700 "is_configured": false, 00:09:42.700 "data_offset": 2048, 00:09:42.700 "data_size": 63488 00:09:42.700 } 00:09:42.700 ] 00:09:42.700 }' 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.700 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 [2024-10-13 02:24:01.732769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.269 [2024-10-13 02:24:01.732853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.269 [2024-10-13 02:24:01.732890] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:43.269 [2024-10-13 02:24:01.732905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.269 [2024-10-13 02:24:01.733304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.269 [2024-10-13 02:24:01.733333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.269 [2024-10-13 02:24:01.733428] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:43.269 [2024-10-13 02:24:01.733459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.269 pt2 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 [2024-10-13 02:24:01.744723] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.269 "name": "raid_bdev1", 00:09:43.269 "uuid": "4bf50fd7-9cc6-42c8-9392-bfa564483587", 00:09:43.269 "strip_size_kb": 64, 00:09:43.269 "state": "configuring", 00:09:43.269 "raid_level": "concat", 00:09:43.269 "superblock": true, 00:09:43.269 "num_base_bdevs": 3, 00:09:43.269 "num_base_bdevs_discovered": 1, 00:09:43.269 "num_base_bdevs_operational": 3, 00:09:43.269 "base_bdevs_list": [ 00:09:43.269 { 00:09:43.269 "name": "pt1", 00:09:43.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.269 "is_configured": true, 00:09:43.269 "data_offset": 2048, 00:09:43.269 "data_size": 63488 00:09:43.269 }, 00:09:43.269 { 00:09:43.269 "name": null, 00:09:43.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.269 "is_configured": false, 00:09:43.269 "data_offset": 0, 00:09:43.269 "data_size": 63488 00:09:43.269 }, 00:09:43.269 { 00:09:43.269 "name": null, 00:09:43.269 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.269 "is_configured": false, 00:09:43.269 "data_offset": 2048, 00:09:43.269 "data_size": 63488 00:09:43.269 } 00:09:43.269 ] 00:09:43.269 }' 00:09:43.269 02:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.270 02:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.530 [2024-10-13 02:24:02.188029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.530 [2024-10-13 02:24:02.188104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.530 [2024-10-13 02:24:02.188130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:43.530 [2024-10-13 02:24:02.188144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.530 [2024-10-13 02:24:02.188566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.530 [2024-10-13 02:24:02.188591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.530 [2024-10-13 02:24:02.188671] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:43.530 [2024-10-13 02:24:02.188697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.530 pt2 00:09:43.530 02:24:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.530 [2024-10-13 02:24:02.199973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.530 [2024-10-13 02:24:02.200020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.530 [2024-10-13 02:24:02.200038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:43.530 [2024-10-13 02:24:02.200047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.530 [2024-10-13 02:24:02.200362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.530 [2024-10-13 02:24:02.200386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.530 [2024-10-13 02:24:02.200441] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:43.530 [2024-10-13 02:24:02.200473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.530 [2024-10-13 02:24:02.200567] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:43.530 [2024-10-13 02:24:02.200580] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:43.530 [2024-10-13 02:24:02.200803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:09:43.530 [2024-10-13 02:24:02.200921] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:43.530 [2024-10-13 02:24:02.200937] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:43.530 [2024-10-13 02:24:02.201033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.530 pt3 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.530 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.790 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.790 02:24:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.790 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.790 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.790 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.790 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.790 "name": "raid_bdev1", 00:09:43.790 "uuid": "4bf50fd7-9cc6-42c8-9392-bfa564483587", 00:09:43.790 "strip_size_kb": 64, 00:09:43.790 "state": "online", 00:09:43.790 "raid_level": "concat", 00:09:43.790 "superblock": true, 00:09:43.790 "num_base_bdevs": 3, 00:09:43.790 "num_base_bdevs_discovered": 3, 00:09:43.790 "num_base_bdevs_operational": 3, 00:09:43.790 "base_bdevs_list": [ 00:09:43.790 { 00:09:43.790 "name": "pt1", 00:09:43.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.790 "is_configured": true, 00:09:43.790 "data_offset": 2048, 00:09:43.790 "data_size": 63488 00:09:43.790 }, 00:09:43.790 { 00:09:43.790 "name": "pt2", 00:09:43.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.790 "is_configured": true, 00:09:43.790 "data_offset": 2048, 00:09:43.790 "data_size": 63488 00:09:43.790 }, 00:09:43.790 { 00:09:43.790 "name": "pt3", 00:09:43.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.790 "is_configured": true, 00:09:43.790 "data_offset": 2048, 00:09:43.790 "data_size": 63488 00:09:43.790 } 00:09:43.790 ] 00:09:43.790 }' 00:09:43.790 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.790 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.050 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:44.050 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:44.050 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.050 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.050 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.050 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.050 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.051 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.051 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.051 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.051 [2024-10-13 02:24:02.611591] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.051 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.051 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.051 "name": "raid_bdev1", 00:09:44.051 "aliases": [ 00:09:44.051 "4bf50fd7-9cc6-42c8-9392-bfa564483587" 00:09:44.051 ], 00:09:44.051 "product_name": "Raid Volume", 00:09:44.051 "block_size": 512, 00:09:44.051 "num_blocks": 190464, 00:09:44.051 "uuid": "4bf50fd7-9cc6-42c8-9392-bfa564483587", 00:09:44.051 "assigned_rate_limits": { 00:09:44.051 "rw_ios_per_sec": 0, 00:09:44.051 "rw_mbytes_per_sec": 0, 00:09:44.051 "r_mbytes_per_sec": 0, 00:09:44.051 "w_mbytes_per_sec": 0 00:09:44.051 }, 00:09:44.051 "claimed": false, 00:09:44.051 "zoned": false, 00:09:44.051 "supported_io_types": { 00:09:44.051 "read": true, 00:09:44.051 "write": true, 00:09:44.051 "unmap": true, 00:09:44.051 "flush": true, 00:09:44.051 "reset": true, 00:09:44.051 "nvme_admin": false, 00:09:44.051 "nvme_io": false, 
00:09:44.051 "nvme_io_md": false, 00:09:44.051 "write_zeroes": true, 00:09:44.051 "zcopy": false, 00:09:44.051 "get_zone_info": false, 00:09:44.051 "zone_management": false, 00:09:44.051 "zone_append": false, 00:09:44.051 "compare": false, 00:09:44.051 "compare_and_write": false, 00:09:44.051 "abort": false, 00:09:44.051 "seek_hole": false, 00:09:44.051 "seek_data": false, 00:09:44.051 "copy": false, 00:09:44.051 "nvme_iov_md": false 00:09:44.051 }, 00:09:44.051 "memory_domains": [ 00:09:44.051 { 00:09:44.051 "dma_device_id": "system", 00:09:44.051 "dma_device_type": 1 00:09:44.051 }, 00:09:44.051 { 00:09:44.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.051 "dma_device_type": 2 00:09:44.051 }, 00:09:44.051 { 00:09:44.051 "dma_device_id": "system", 00:09:44.051 "dma_device_type": 1 00:09:44.051 }, 00:09:44.051 { 00:09:44.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.051 "dma_device_type": 2 00:09:44.051 }, 00:09:44.051 { 00:09:44.051 "dma_device_id": "system", 00:09:44.051 "dma_device_type": 1 00:09:44.051 }, 00:09:44.051 { 00:09:44.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.051 "dma_device_type": 2 00:09:44.051 } 00:09:44.051 ], 00:09:44.051 "driver_specific": { 00:09:44.051 "raid": { 00:09:44.051 "uuid": "4bf50fd7-9cc6-42c8-9392-bfa564483587", 00:09:44.051 "strip_size_kb": 64, 00:09:44.051 "state": "online", 00:09:44.051 "raid_level": "concat", 00:09:44.051 "superblock": true, 00:09:44.051 "num_base_bdevs": 3, 00:09:44.051 "num_base_bdevs_discovered": 3, 00:09:44.051 "num_base_bdevs_operational": 3, 00:09:44.051 "base_bdevs_list": [ 00:09:44.051 { 00:09:44.051 "name": "pt1", 00:09:44.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.051 "is_configured": true, 00:09:44.051 "data_offset": 2048, 00:09:44.051 "data_size": 63488 00:09:44.051 }, 00:09:44.051 { 00:09:44.051 "name": "pt2", 00:09:44.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.051 "is_configured": true, 00:09:44.051 "data_offset": 2048, 00:09:44.051 
"data_size": 63488 00:09:44.051 }, 00:09:44.051 { 00:09:44.051 "name": "pt3", 00:09:44.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.051 "is_configured": true, 00:09:44.051 "data_offset": 2048, 00:09:44.051 "data_size": 63488 00:09:44.051 } 00:09:44.051 ] 00:09:44.051 } 00:09:44.051 } 00:09:44.051 }' 00:09:44.051 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.051 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:44.051 pt2 00:09:44.051 pt3' 00:09:44.051 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.311 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.312 [2024-10-13 02:24:02.887075] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4bf50fd7-9cc6-42c8-9392-bfa564483587 '!=' 4bf50fd7-9cc6-42c8-9392-bfa564483587 ']' 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77841 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77841 ']' 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77841 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77841 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.312 killing process with pid 77841 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77841' 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77841 00:09:44.312 [2024-10-13 02:24:02.965777] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:44.312 [2024-10-13 02:24:02.965905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.312 [2024-10-13 02:24:02.965976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.312 [2024-10-13 02:24:02.965988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:44.312 02:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77841 00:09:44.572 [2024-10-13 02:24:02.999658] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.572 02:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:44.572 00:09:44.572 real 0m3.951s 00:09:44.572 user 0m6.152s 00:09:44.572 sys 0m0.911s 00:09:44.572 02:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.572 02:24:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.572 ************************************ 00:09:44.572 END TEST raid_superblock_test 00:09:44.572 ************************************ 00:09:44.832 02:24:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:44.832 02:24:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:44.832 02:24:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.832 02:24:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.832 ************************************ 00:09:44.832 START TEST raid_read_error_test 00:09:44.832 ************************************ 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:44.832 02:24:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cBwU27sDuH 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78082 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78082 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78082 ']' 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.832 02:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.832 [2024-10-13 02:24:03.431315] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:44.832 [2024-10-13 02:24:03.431444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78082 ] 00:09:45.092 [2024-10-13 02:24:03.577270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.092 [2024-10-13 02:24:03.622645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.092 [2024-10-13 02:24:03.665508] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.092 [2024-10-13 02:24:03.665559] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.662 BaseBdev1_malloc 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.662 true 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.662 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.662 [2024-10-13 02:24:04.311509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:45.662 [2024-10-13 02:24:04.311580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.662 [2024-10-13 02:24:04.311611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:45.663 [2024-10-13 02:24:04.311631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.663 [2024-10-13 02:24:04.313687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.663 [2024-10-13 02:24:04.313729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:45.663 BaseBdev1 00:09:45.663 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.663 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.663 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:45.663 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.663 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.663 BaseBdev2_malloc 00:09:45.663 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.663 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:45.663 02:24:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.663 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.923 true 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.923 [2024-10-13 02:24:04.360446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:45.923 [2024-10-13 02:24:04.360510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.923 [2024-10-13 02:24:04.360530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:45.923 [2024-10-13 02:24:04.360538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.923 [2024-10-13 02:24:04.362535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.923 [2024-10-13 02:24:04.362570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:45.923 BaseBdev2 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.923 BaseBdev3_malloc 00:09:45.923 02:24:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.923 true 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.923 [2024-10-13 02:24:04.401023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:45.923 [2024-10-13 02:24:04.401076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.923 [2024-10-13 02:24:04.401093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:45.923 [2024-10-13 02:24:04.401103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.923 [2024-10-13 02:24:04.403077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.923 [2024-10-13 02:24:04.403111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:45.923 BaseBdev3 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.923 [2024-10-13 02:24:04.413077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.923 [2024-10-13 02:24:04.414863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.923 [2024-10-13 02:24:04.414953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.923 [2024-10-13 02:24:04.415138] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:45.923 [2024-10-13 02:24:04.415158] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:45.923 [2024-10-13 02:24:04.415410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:45.923 [2024-10-13 02:24:04.415556] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:45.923 [2024-10-13 02:24:04.415572] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:45.923 [2024-10-13 02:24:04.415710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.923 02:24:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.923 "name": "raid_bdev1", 00:09:45.923 "uuid": "d62a6fe8-967e-41d8-86b5-a224112c5c04", 00:09:45.923 "strip_size_kb": 64, 00:09:45.923 "state": "online", 00:09:45.923 "raid_level": "concat", 00:09:45.923 "superblock": true, 00:09:45.923 "num_base_bdevs": 3, 00:09:45.923 "num_base_bdevs_discovered": 3, 00:09:45.923 "num_base_bdevs_operational": 3, 00:09:45.923 "base_bdevs_list": [ 00:09:45.923 { 00:09:45.923 "name": "BaseBdev1", 00:09:45.923 "uuid": "b7ac9124-55f0-5195-9480-057dacf07e62", 00:09:45.923 "is_configured": true, 00:09:45.923 "data_offset": 2048, 00:09:45.923 "data_size": 63488 00:09:45.923 }, 00:09:45.923 { 00:09:45.923 "name": "BaseBdev2", 00:09:45.923 "uuid": "ce74bd9b-6c52-5fe3-a633-244bc84cc5d9", 00:09:45.923 "is_configured": true, 00:09:45.923 "data_offset": 2048, 00:09:45.923 "data_size": 63488 
00:09:45.923 }, 00:09:45.923 { 00:09:45.923 "name": "BaseBdev3", 00:09:45.923 "uuid": "851983b1-3e97-5ca9-b90b-c8be7bddf2e9", 00:09:45.923 "is_configured": true, 00:09:45.923 "data_offset": 2048, 00:09:45.923 "data_size": 63488 00:09:45.923 } 00:09:45.923 ] 00:09:45.923 }' 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.923 02:24:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.183 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:46.183 02:24:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:46.443 [2024-10-13 02:24:04.920643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.416 "name": "raid_bdev1", 00:09:47.416 "uuid": "d62a6fe8-967e-41d8-86b5-a224112c5c04", 00:09:47.416 "strip_size_kb": 64, 00:09:47.416 "state": "online", 00:09:47.416 "raid_level": "concat", 00:09:47.416 "superblock": true, 00:09:47.416 "num_base_bdevs": 3, 00:09:47.416 "num_base_bdevs_discovered": 3, 00:09:47.416 "num_base_bdevs_operational": 3, 00:09:47.416 "base_bdevs_list": [ 00:09:47.416 { 00:09:47.416 "name": "BaseBdev1", 00:09:47.416 "uuid": "b7ac9124-55f0-5195-9480-057dacf07e62", 00:09:47.416 "is_configured": true, 00:09:47.416 "data_offset": 2048, 00:09:47.416 "data_size": 63488 
00:09:47.416 }, 00:09:47.416 { 00:09:47.416 "name": "BaseBdev2", 00:09:47.416 "uuid": "ce74bd9b-6c52-5fe3-a633-244bc84cc5d9", 00:09:47.416 "is_configured": true, 00:09:47.416 "data_offset": 2048, 00:09:47.416 "data_size": 63488 00:09:47.416 }, 00:09:47.416 { 00:09:47.416 "name": "BaseBdev3", 00:09:47.416 "uuid": "851983b1-3e97-5ca9-b90b-c8be7bddf2e9", 00:09:47.416 "is_configured": true, 00:09:47.416 "data_offset": 2048, 00:09:47.416 "data_size": 63488 00:09:47.416 } 00:09:47.416 ] 00:09:47.416 }' 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.416 02:24:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.676 [2024-10-13 02:24:06.284437] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.676 [2024-10-13 02:24:06.284490] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.676 [2024-10-13 02:24:06.287067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.676 [2024-10-13 02:24:06.287130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.676 [2024-10-13 02:24:06.287164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.676 [2024-10-13 02:24:06.287175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # 
killprocess 78082 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78082 ']' 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78082 00:09:47.676 { 00:09:47.676 "results": [ 00:09:47.676 { 00:09:47.676 "job": "raid_bdev1", 00:09:47.676 "core_mask": "0x1", 00:09:47.676 "workload": "randrw", 00:09:47.676 "percentage": 50, 00:09:47.676 "status": "finished", 00:09:47.676 "queue_depth": 1, 00:09:47.676 "io_size": 131072, 00:09:47.676 "runtime": 1.364684, 00:09:47.676 "iops": 16735.742486905394, 00:09:47.676 "mibps": 2091.9678108631742, 00:09:47.676 "io_failed": 1, 00:09:47.676 "io_timeout": 0, 00:09:47.676 "avg_latency_us": 82.8611635145573, 00:09:47.676 "min_latency_us": 24.929257641921396, 00:09:47.676 "max_latency_us": 1337.907423580786 00:09:47.676 } 00:09:47.676 ], 00:09:47.676 "core_count": 1 00:09:47.676 } 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78082 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.676 killing process with pid 78082 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78082' 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78082 00:09:47.676 [2024-10-13 02:24:06.333379] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.676 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78082 00:09:47.935 [2024-10-13 
02:24:06.357917] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cBwU27sDuH 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:47.935 00:09:47.935 real 0m3.275s 00:09:47.935 user 0m4.101s 00:09:47.935 sys 0m0.572s 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.935 02:24:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.935 ************************************ 00:09:47.935 END TEST raid_read_error_test 00:09:47.935 ************************************ 00:09:48.195 02:24:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:48.195 02:24:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:48.195 02:24:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.195 02:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.195 ************************************ 00:09:48.195 START TEST raid_write_error_test 00:09:48.195 ************************************ 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:48.195 02:24:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.195 02:24:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ODJu6Xdcd8 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78212 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78212 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78212 ']' 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.195 02:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.196 02:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.196 02:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.196 02:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.196 [2024-10-13 02:24:06.774340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:48.196 [2024-10-13 02:24:06.774546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78212 ] 00:09:48.455 [2024-10-13 02:24:06.922082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.455 [2024-10-13 02:24:06.969666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.455 [2024-10-13 02:24:07.011746] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.455 [2024-10-13 02:24:07.011890] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.032 BaseBdev1_malloc 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.032 true 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.032 [2024-10-13 02:24:07.658054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.032 [2024-10-13 02:24:07.658210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.032 [2024-10-13 02:24:07.658255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:49.032 [2024-10-13 02:24:07.658285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.032 [2024-10-13 02:24:07.660436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.032 [2024-10-13 02:24:07.660510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.032 BaseBdev1 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.032 BaseBdev2_malloc 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.032 true 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.032 [2024-10-13 02:24:07.698258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.032 [2024-10-13 02:24:07.698392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.032 [2024-10-13 02:24:07.698439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:49.032 [2024-10-13 02:24:07.698471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.032 [2024-10-13 02:24:07.700656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.032 [2024-10-13 02:24:07.700730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.032 BaseBdev2 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.032 02:24:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.032 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.305 BaseBdev3_malloc 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.305 true 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.305 [2024-10-13 02:24:07.726791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.305 [2024-10-13 02:24:07.726918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.305 [2024-10-13 02:24:07.726956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:49.305 [2024-10-13 02:24:07.726984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.305 [2024-10-13 02:24:07.729038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.305 [2024-10-13 02:24:07.729110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:49.305 BaseBdev3 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.305 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.305 [2024-10-13 02:24:07.738867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.306 [2024-10-13 02:24:07.740823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.306 [2024-10-13 02:24:07.740956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.306 [2024-10-13 02:24:07.741157] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:49.306 [2024-10-13 02:24:07.741215] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:49.306 [2024-10-13 02:24:07.741497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:49.306 [2024-10-13 02:24:07.741667] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:49.306 [2024-10-13 02:24:07.741710] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:49.306 [2024-10-13 02:24:07.741891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.306 "name": "raid_bdev1", 00:09:49.306 "uuid": "1608133b-4d8d-4beb-83b7-504f2eb617c6", 00:09:49.306 "strip_size_kb": 64, 00:09:49.306 "state": "online", 00:09:49.306 "raid_level": "concat", 00:09:49.306 "superblock": true, 00:09:49.306 "num_base_bdevs": 3, 00:09:49.306 "num_base_bdevs_discovered": 3, 00:09:49.306 "num_base_bdevs_operational": 3, 00:09:49.306 "base_bdevs_list": [ 00:09:49.306 { 00:09:49.306 
"name": "BaseBdev1", 00:09:49.306 "uuid": "676184fe-d109-59f7-a183-c92a7b678e47", 00:09:49.306 "is_configured": true, 00:09:49.306 "data_offset": 2048, 00:09:49.306 "data_size": 63488 00:09:49.306 }, 00:09:49.306 { 00:09:49.306 "name": "BaseBdev2", 00:09:49.306 "uuid": "2cde6054-43ce-5486-85e7-3b23f743e88e", 00:09:49.306 "is_configured": true, 00:09:49.306 "data_offset": 2048, 00:09:49.306 "data_size": 63488 00:09:49.306 }, 00:09:49.306 { 00:09:49.306 "name": "BaseBdev3", 00:09:49.306 "uuid": "a20eb502-352c-5490-9e70-be60d7a4df65", 00:09:49.306 "is_configured": true, 00:09:49.306 "data_offset": 2048, 00:09:49.306 "data_size": 63488 00:09:49.306 } 00:09:49.306 ] 00:09:49.306 }' 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.306 02:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.564 02:24:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.564 02:24:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.823 [2024-10-13 02:24:08.314416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.760 "name": "raid_bdev1", 00:09:50.760 "uuid": "1608133b-4d8d-4beb-83b7-504f2eb617c6", 00:09:50.760 "strip_size_kb": 64, 00:09:50.760 "state": "online", 
00:09:50.760 "raid_level": "concat", 00:09:50.760 "superblock": true, 00:09:50.760 "num_base_bdevs": 3, 00:09:50.760 "num_base_bdevs_discovered": 3, 00:09:50.760 "num_base_bdevs_operational": 3, 00:09:50.760 "base_bdevs_list": [ 00:09:50.760 { 00:09:50.760 "name": "BaseBdev1", 00:09:50.760 "uuid": "676184fe-d109-59f7-a183-c92a7b678e47", 00:09:50.760 "is_configured": true, 00:09:50.760 "data_offset": 2048, 00:09:50.760 "data_size": 63488 00:09:50.760 }, 00:09:50.760 { 00:09:50.760 "name": "BaseBdev2", 00:09:50.760 "uuid": "2cde6054-43ce-5486-85e7-3b23f743e88e", 00:09:50.760 "is_configured": true, 00:09:50.760 "data_offset": 2048, 00:09:50.760 "data_size": 63488 00:09:50.760 }, 00:09:50.760 { 00:09:50.760 "name": "BaseBdev3", 00:09:50.760 "uuid": "a20eb502-352c-5490-9e70-be60d7a4df65", 00:09:50.760 "is_configured": true, 00:09:50.760 "data_offset": 2048, 00:09:50.760 "data_size": 63488 00:09:50.760 } 00:09:50.760 ] 00:09:50.760 }' 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.760 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.018 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.018 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.018 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.018 [2024-10-13 02:24:09.698236] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.018 [2024-10-13 02:24:09.698367] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.276 [2024-10-13 02:24:09.700887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.276 [2024-10-13 02:24:09.701004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.276 [2024-10-13 02:24:09.701056] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.276 [2024-10-13 02:24:09.701097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:51.276 { 00:09:51.276 "results": [ 00:09:51.276 { 00:09:51.276 "job": "raid_bdev1", 00:09:51.276 "core_mask": "0x1", 00:09:51.276 "workload": "randrw", 00:09:51.276 "percentage": 50, 00:09:51.276 "status": "finished", 00:09:51.276 "queue_depth": 1, 00:09:51.276 "io_size": 131072, 00:09:51.276 "runtime": 1.384696, 00:09:51.276 "iops": 16818.13192209698, 00:09:51.276 "mibps": 2102.2664902621227, 00:09:51.276 "io_failed": 1, 00:09:51.276 "io_timeout": 0, 00:09:51.276 "avg_latency_us": 82.45115821120642, 00:09:51.276 "min_latency_us": 25.7117903930131, 00:09:51.276 "max_latency_us": 1373.6803493449781 00:09:51.276 } 00:09:51.276 ], 00:09:51.276 "core_count": 1 00:09:51.276 } 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78212 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78212 ']' 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78212 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78212 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.276 killing process with pid 78212 00:09:51.276 02:24:09 
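The results JSON above reports `io_failed: 1` over `runtime: 1.384696` seconds; the script later extracts a failures-per-second figure (0.72) from the bdevperf log with grep/awk. A sketch of the underlying arithmetic with the values from this run:

```shell
#!/usr/bin/env bash
# Failures per second = io_failed / runtime, printed to two decimals;
# both values copied from the results JSON in this run's log.
io_failed=1
runtime=1.384696

fail_per_s=$(awk -v f="$io_failed" -v r="$runtime" 'BEGIN { printf "%.2f", f / r }')
echo "$fail_per_s"   # 0.72
```

The one injected write failure over the ~1.38 s run is why the later `[[ 0.72 != \0\.\0\0 ]]` assertion passes.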
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78212' 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78212 00:09:51.276 [2024-10-13 02:24:09.747337] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.276 02:24:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78212 00:09:51.276 [2024-10-13 02:24:09.772485] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.534 02:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ODJu6Xdcd8 00:09:51.534 02:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:51.534 02:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:51.534 02:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:51.534 02:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:51.534 02:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.534 02:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.534 02:24:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:51.534 ************************************ 00:09:51.534 END TEST raid_write_error_test 00:09:51.534 ************************************ 00:09:51.535 00:09:51.535 real 0m3.350s 00:09:51.535 user 0m4.257s 00:09:51.535 sys 0m0.558s 00:09:51.535 02:24:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.535 02:24:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.535 02:24:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:51.535 02:24:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:51.535 02:24:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:51.535 02:24:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.535 02:24:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.535 ************************************ 00:09:51.535 START TEST raid_state_function_test 00:09:51.535 ************************************ 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:51.535 Process raid pid: 78339 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78339 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78339' 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78339 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78339 ']' 00:09:51.535 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.535 02:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.535 [2024-10-13 02:24:10.190510] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:51.535 [2024-10-13 02:24:10.190620] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.793 [2024-10-13 02:24:10.330804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.793 [2024-10-13 02:24:10.378749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.793 [2024-10-13 02:24:10.422257] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.793 [2024-10-13 02:24:10.422296] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.730 [2024-10-13 02:24:11.056056] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.730 [2024-10-13 02:24:11.056205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.730 [2024-10-13 02:24:11.056237] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.730 [2024-10-13 02:24:11.056261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.730 [2024-10-13 02:24:11.056279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:52.730 [2024-10-13 02:24:11.056302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.730 
02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.730 "name": "Existed_Raid", 00:09:52.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.730 "strip_size_kb": 0, 00:09:52.730 "state": "configuring", 00:09:52.730 "raid_level": "raid1", 00:09:52.730 "superblock": false, 00:09:52.730 "num_base_bdevs": 3, 00:09:52.730 "num_base_bdevs_discovered": 0, 00:09:52.730 "num_base_bdevs_operational": 3, 00:09:52.730 "base_bdevs_list": [ 00:09:52.730 { 00:09:52.730 "name": "BaseBdev1", 00:09:52.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.730 "is_configured": false, 00:09:52.730 "data_offset": 0, 00:09:52.730 "data_size": 0 00:09:52.730 }, 00:09:52.730 { 00:09:52.730 "name": "BaseBdev2", 00:09:52.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.730 "is_configured": false, 00:09:52.730 "data_offset": 0, 00:09:52.730 "data_size": 0 00:09:52.730 }, 00:09:52.730 { 00:09:52.730 "name": "BaseBdev3", 00:09:52.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.730 "is_configured": false, 00:09:52.730 "data_offset": 0, 00:09:52.730 "data_size": 0 00:09:52.730 } 00:09:52.730 ] 00:09:52.730 }' 00:09:52.730 02:24:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.730 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.990 [2024-10-13 02:24:11.451348] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.990 [2024-10-13 02:24:11.451480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.990 [2024-10-13 02:24:11.463286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.990 [2024-10-13 02:24:11.463382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.990 [2024-10-13 02:24:11.463409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.990 [2024-10-13 02:24:11.463431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.990 [2024-10-13 02:24:11.463450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:52.990 [2024-10-13 02:24:11.463470] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.990 [2024-10-13 02:24:11.484130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.990 BaseBdev1 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.990 [ 00:09:52.990 { 00:09:52.990 "name": "BaseBdev1", 00:09:52.990 "aliases": [ 00:09:52.990 "ff0bc7e2-88d8-439e-ae05-bbdfcb6145c4" 00:09:52.990 ], 00:09:52.990 "product_name": "Malloc disk", 00:09:52.990 "block_size": 512, 00:09:52.990 "num_blocks": 65536, 00:09:52.990 "uuid": "ff0bc7e2-88d8-439e-ae05-bbdfcb6145c4", 00:09:52.990 "assigned_rate_limits": { 00:09:52.990 "rw_ios_per_sec": 0, 00:09:52.990 "rw_mbytes_per_sec": 0, 00:09:52.990 "r_mbytes_per_sec": 0, 00:09:52.990 "w_mbytes_per_sec": 0 00:09:52.990 }, 00:09:52.990 "claimed": true, 00:09:52.990 "claim_type": "exclusive_write", 00:09:52.990 "zoned": false, 00:09:52.990 "supported_io_types": { 00:09:52.990 "read": true, 00:09:52.990 "write": true, 00:09:52.990 "unmap": true, 00:09:52.990 "flush": true, 00:09:52.990 "reset": true, 00:09:52.990 "nvme_admin": false, 00:09:52.990 "nvme_io": false, 00:09:52.990 "nvme_io_md": false, 00:09:52.990 "write_zeroes": true, 00:09:52.990 "zcopy": true, 00:09:52.990 "get_zone_info": false, 00:09:52.990 "zone_management": false, 00:09:52.990 "zone_append": false, 00:09:52.990 "compare": false, 00:09:52.990 "compare_and_write": false, 00:09:52.990 "abort": true, 00:09:52.990 "seek_hole": false, 00:09:52.990 "seek_data": false, 00:09:52.990 "copy": true, 00:09:52.990 "nvme_iov_md": false 00:09:52.990 }, 00:09:52.990 "memory_domains": [ 00:09:52.990 { 00:09:52.990 "dma_device_id": "system", 00:09:52.990 "dma_device_type": 1 00:09:52.990 }, 00:09:52.990 { 00:09:52.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.990 "dma_device_type": 2 00:09:52.990 } 00:09:52.990 ], 00:09:52.990 "driver_specific": {} 00:09:52.990 } 00:09:52.990 ] 00:09:52.990 02:24:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:52.990 "name": "Existed_Raid", 00:09:52.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.990 "strip_size_kb": 0, 00:09:52.990 "state": "configuring", 00:09:52.990 "raid_level": "raid1", 00:09:52.990 "superblock": false, 00:09:52.990 "num_base_bdevs": 3, 00:09:52.990 "num_base_bdevs_discovered": 1, 00:09:52.990 "num_base_bdevs_operational": 3, 00:09:52.990 "base_bdevs_list": [ 00:09:52.990 { 00:09:52.990 "name": "BaseBdev1", 00:09:52.990 "uuid": "ff0bc7e2-88d8-439e-ae05-bbdfcb6145c4", 00:09:52.990 "is_configured": true, 00:09:52.990 "data_offset": 0, 00:09:52.990 "data_size": 65536 00:09:52.990 }, 00:09:52.990 { 00:09:52.990 "name": "BaseBdev2", 00:09:52.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.990 "is_configured": false, 00:09:52.990 "data_offset": 0, 00:09:52.990 "data_size": 0 00:09:52.990 }, 00:09:52.990 { 00:09:52.990 "name": "BaseBdev3", 00:09:52.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.990 "is_configured": false, 00:09:52.990 "data_offset": 0, 00:09:52.990 "data_size": 0 00:09:52.990 } 00:09:52.990 ] 00:09:52.990 }' 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.990 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.270 [2024-10-13 02:24:11.927480] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.270 [2024-10-13 02:24:11.927630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.270 [2024-10-13 02:24:11.939487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.270 [2024-10-13 02:24:11.941432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.270 [2024-10-13 02:24:11.941509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.270 [2024-10-13 02:24:11.941538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.270 [2024-10-13 02:24:11.941563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.270 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.529 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.529 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.529 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.529 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.529 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.529 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.529 "name": "Existed_Raid", 00:09:53.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.529 "strip_size_kb": 0, 00:09:53.529 "state": "configuring", 00:09:53.529 "raid_level": "raid1", 00:09:53.529 "superblock": false, 00:09:53.529 "num_base_bdevs": 3, 00:09:53.529 "num_base_bdevs_discovered": 1, 00:09:53.529 "num_base_bdevs_operational": 3, 00:09:53.529 "base_bdevs_list": [ 00:09:53.529 { 00:09:53.529 "name": "BaseBdev1", 00:09:53.529 "uuid": "ff0bc7e2-88d8-439e-ae05-bbdfcb6145c4", 00:09:53.529 "is_configured": true, 00:09:53.529 "data_offset": 0, 00:09:53.529 "data_size": 65536 00:09:53.529 }, 00:09:53.529 { 00:09:53.529 "name": "BaseBdev2", 00:09:53.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.529 
"is_configured": false, 00:09:53.529 "data_offset": 0, 00:09:53.529 "data_size": 0 00:09:53.529 }, 00:09:53.529 { 00:09:53.529 "name": "BaseBdev3", 00:09:53.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.529 "is_configured": false, 00:09:53.529 "data_offset": 0, 00:09:53.529 "data_size": 0 00:09:53.529 } 00:09:53.529 ] 00:09:53.529 }' 00:09:53.529 02:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.529 02:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 [2024-10-13 02:24:12.377093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.788 BaseBdev2 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.788 02:24:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 [ 00:09:53.788 { 00:09:53.788 "name": "BaseBdev2", 00:09:53.788 "aliases": [ 00:09:53.788 "f5d4ae47-b306-49c9-8e40-b82be7a801ed" 00:09:53.788 ], 00:09:53.788 "product_name": "Malloc disk", 00:09:53.788 "block_size": 512, 00:09:53.788 "num_blocks": 65536, 00:09:53.788 "uuid": "f5d4ae47-b306-49c9-8e40-b82be7a801ed", 00:09:53.788 "assigned_rate_limits": { 00:09:53.788 "rw_ios_per_sec": 0, 00:09:53.788 "rw_mbytes_per_sec": 0, 00:09:53.788 "r_mbytes_per_sec": 0, 00:09:53.788 "w_mbytes_per_sec": 0 00:09:53.788 }, 00:09:53.788 "claimed": true, 00:09:53.788 "claim_type": "exclusive_write", 00:09:53.788 "zoned": false, 00:09:53.788 "supported_io_types": { 00:09:53.788 "read": true, 00:09:53.788 "write": true, 00:09:53.788 "unmap": true, 00:09:53.788 "flush": true, 00:09:53.788 "reset": true, 00:09:53.788 "nvme_admin": false, 00:09:53.788 "nvme_io": false, 00:09:53.788 "nvme_io_md": false, 00:09:53.788 "write_zeroes": true, 00:09:53.788 "zcopy": true, 00:09:53.788 "get_zone_info": false, 00:09:53.788 "zone_management": false, 00:09:53.788 "zone_append": false, 00:09:53.788 "compare": false, 00:09:53.788 "compare_and_write": false, 00:09:53.788 "abort": true, 00:09:53.788 "seek_hole": false, 00:09:53.788 "seek_data": false, 00:09:53.788 "copy": true, 00:09:53.788 "nvme_iov_md": false 00:09:53.788 }, 00:09:53.788 
"memory_domains": [ 00:09:53.788 { 00:09:53.788 "dma_device_id": "system", 00:09:53.788 "dma_device_type": 1 00:09:53.788 }, 00:09:53.788 { 00:09:53.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.788 "dma_device_type": 2 00:09:53.788 } 00:09:53.788 ], 00:09:53.788 "driver_specific": {} 00:09:53.788 } 00:09:53.788 ] 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.047 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.047 "name": "Existed_Raid", 00:09:54.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.047 "strip_size_kb": 0, 00:09:54.047 "state": "configuring", 00:09:54.047 "raid_level": "raid1", 00:09:54.047 "superblock": false, 00:09:54.047 "num_base_bdevs": 3, 00:09:54.047 "num_base_bdevs_discovered": 2, 00:09:54.047 "num_base_bdevs_operational": 3, 00:09:54.047 "base_bdevs_list": [ 00:09:54.047 { 00:09:54.047 "name": "BaseBdev1", 00:09:54.047 "uuid": "ff0bc7e2-88d8-439e-ae05-bbdfcb6145c4", 00:09:54.047 "is_configured": true, 00:09:54.047 "data_offset": 0, 00:09:54.047 "data_size": 65536 00:09:54.047 }, 00:09:54.047 { 00:09:54.047 "name": "BaseBdev2", 00:09:54.047 "uuid": "f5d4ae47-b306-49c9-8e40-b82be7a801ed", 00:09:54.047 "is_configured": true, 00:09:54.047 "data_offset": 0, 00:09:54.047 "data_size": 65536 00:09:54.047 }, 00:09:54.047 { 00:09:54.047 "name": "BaseBdev3", 00:09:54.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.047 "is_configured": false, 00:09:54.047 "data_offset": 0, 00:09:54.047 "data_size": 0 00:09:54.047 } 00:09:54.047 ] 00:09:54.047 }' 00:09:54.047 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.047 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.307 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:54.307 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.307 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.307 [2024-10-13 02:24:12.907354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.307 BaseBdev3 00:09:54.308 [2024-10-13 02:24:12.907503] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:54.308 [2024-10-13 02:24:12.907527] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:54.308 [2024-10-13 02:24:12.907804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:54.308 [2024-10-13 02:24:12.907962] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:54.308 [2024-10-13 02:24:12.907973] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:54.308 [2024-10-13 02:24:12.908188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.308 [ 00:09:54.308 { 00:09:54.308 "name": "BaseBdev3", 00:09:54.308 "aliases": [ 00:09:54.308 "30893935-5681-4901-be12-a96a6f8bd90d" 00:09:54.308 ], 00:09:54.308 "product_name": "Malloc disk", 00:09:54.308 "block_size": 512, 00:09:54.308 "num_blocks": 65536, 00:09:54.308 "uuid": "30893935-5681-4901-be12-a96a6f8bd90d", 00:09:54.308 "assigned_rate_limits": { 00:09:54.308 "rw_ios_per_sec": 0, 00:09:54.308 "rw_mbytes_per_sec": 0, 00:09:54.308 "r_mbytes_per_sec": 0, 00:09:54.308 "w_mbytes_per_sec": 0 00:09:54.308 }, 00:09:54.308 "claimed": true, 00:09:54.308 "claim_type": "exclusive_write", 00:09:54.308 "zoned": false, 00:09:54.308 "supported_io_types": { 00:09:54.308 "read": true, 00:09:54.308 "write": true, 00:09:54.308 "unmap": true, 00:09:54.308 "flush": true, 00:09:54.308 "reset": true, 00:09:54.308 "nvme_admin": false, 00:09:54.308 "nvme_io": false, 00:09:54.308 "nvme_io_md": false, 00:09:54.308 "write_zeroes": true, 00:09:54.308 "zcopy": true, 00:09:54.308 "get_zone_info": false, 00:09:54.308 "zone_management": false, 00:09:54.308 "zone_append": false, 00:09:54.308 "compare": false, 00:09:54.308 "compare_and_write": false, 00:09:54.308 "abort": true, 00:09:54.308 "seek_hole": false, 00:09:54.308 "seek_data": false, 00:09:54.308 
"copy": true, 00:09:54.308 "nvme_iov_md": false 00:09:54.308 }, 00:09:54.308 "memory_domains": [ 00:09:54.308 { 00:09:54.308 "dma_device_id": "system", 00:09:54.308 "dma_device_type": 1 00:09:54.308 }, 00:09:54.308 { 00:09:54.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.308 "dma_device_type": 2 00:09:54.308 } 00:09:54.308 ], 00:09:54.308 "driver_specific": {} 00:09:54.308 } 00:09:54.308 ] 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.308 02:24:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.308 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.568 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.568 "name": "Existed_Raid", 00:09:54.568 "uuid": "331f316c-62ca-4a56-9002-71ef4d17f438", 00:09:54.568 "strip_size_kb": 0, 00:09:54.568 "state": "online", 00:09:54.568 "raid_level": "raid1", 00:09:54.568 "superblock": false, 00:09:54.568 "num_base_bdevs": 3, 00:09:54.568 "num_base_bdevs_discovered": 3, 00:09:54.568 "num_base_bdevs_operational": 3, 00:09:54.568 "base_bdevs_list": [ 00:09:54.568 { 00:09:54.568 "name": "BaseBdev1", 00:09:54.568 "uuid": "ff0bc7e2-88d8-439e-ae05-bbdfcb6145c4", 00:09:54.568 "is_configured": true, 00:09:54.568 "data_offset": 0, 00:09:54.568 "data_size": 65536 00:09:54.568 }, 00:09:54.568 { 00:09:54.568 "name": "BaseBdev2", 00:09:54.568 "uuid": "f5d4ae47-b306-49c9-8e40-b82be7a801ed", 00:09:54.568 "is_configured": true, 00:09:54.568 "data_offset": 0, 00:09:54.568 "data_size": 65536 00:09:54.568 }, 00:09:54.568 { 00:09:54.568 "name": "BaseBdev3", 00:09:54.568 "uuid": "30893935-5681-4901-be12-a96a6f8bd90d", 00:09:54.568 "is_configured": true, 00:09:54.568 "data_offset": 0, 00:09:54.568 "data_size": 65536 00:09:54.568 } 00:09:54.568 ] 00:09:54.568 }' 00:09:54.568 02:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.568 02:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.828 02:24:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.828 [2024-10-13 02:24:13.430970] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.828 "name": "Existed_Raid", 00:09:54.828 "aliases": [ 00:09:54.828 "331f316c-62ca-4a56-9002-71ef4d17f438" 00:09:54.828 ], 00:09:54.828 "product_name": "Raid Volume", 00:09:54.828 "block_size": 512, 00:09:54.828 "num_blocks": 65536, 00:09:54.828 "uuid": "331f316c-62ca-4a56-9002-71ef4d17f438", 00:09:54.828 "assigned_rate_limits": { 00:09:54.828 "rw_ios_per_sec": 0, 00:09:54.828 "rw_mbytes_per_sec": 0, 00:09:54.828 "r_mbytes_per_sec": 0, 00:09:54.828 "w_mbytes_per_sec": 0 00:09:54.828 }, 00:09:54.828 "claimed": false, 00:09:54.828 "zoned": false, 
00:09:54.828 "supported_io_types": { 00:09:54.828 "read": true, 00:09:54.828 "write": true, 00:09:54.828 "unmap": false, 00:09:54.828 "flush": false, 00:09:54.828 "reset": true, 00:09:54.828 "nvme_admin": false, 00:09:54.828 "nvme_io": false, 00:09:54.828 "nvme_io_md": false, 00:09:54.828 "write_zeroes": true, 00:09:54.828 "zcopy": false, 00:09:54.828 "get_zone_info": false, 00:09:54.828 "zone_management": false, 00:09:54.828 "zone_append": false, 00:09:54.828 "compare": false, 00:09:54.828 "compare_and_write": false, 00:09:54.828 "abort": false, 00:09:54.828 "seek_hole": false, 00:09:54.828 "seek_data": false, 00:09:54.828 "copy": false, 00:09:54.828 "nvme_iov_md": false 00:09:54.828 }, 00:09:54.828 "memory_domains": [ 00:09:54.828 { 00:09:54.828 "dma_device_id": "system", 00:09:54.828 "dma_device_type": 1 00:09:54.828 }, 00:09:54.828 { 00:09:54.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.828 "dma_device_type": 2 00:09:54.828 }, 00:09:54.828 { 00:09:54.828 "dma_device_id": "system", 00:09:54.828 "dma_device_type": 1 00:09:54.828 }, 00:09:54.828 { 00:09:54.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.828 "dma_device_type": 2 00:09:54.828 }, 00:09:54.828 { 00:09:54.828 "dma_device_id": "system", 00:09:54.828 "dma_device_type": 1 00:09:54.828 }, 00:09:54.828 { 00:09:54.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.828 "dma_device_type": 2 00:09:54.828 } 00:09:54.828 ], 00:09:54.828 "driver_specific": { 00:09:54.828 "raid": { 00:09:54.828 "uuid": "331f316c-62ca-4a56-9002-71ef4d17f438", 00:09:54.828 "strip_size_kb": 0, 00:09:54.828 "state": "online", 00:09:54.828 "raid_level": "raid1", 00:09:54.828 "superblock": false, 00:09:54.828 "num_base_bdevs": 3, 00:09:54.828 "num_base_bdevs_discovered": 3, 00:09:54.828 "num_base_bdevs_operational": 3, 00:09:54.828 "base_bdevs_list": [ 00:09:54.828 { 00:09:54.828 "name": "BaseBdev1", 00:09:54.828 "uuid": "ff0bc7e2-88d8-439e-ae05-bbdfcb6145c4", 00:09:54.828 "is_configured": true, 00:09:54.828 
"data_offset": 0, 00:09:54.828 "data_size": 65536 00:09:54.828 }, 00:09:54.828 { 00:09:54.828 "name": "BaseBdev2", 00:09:54.828 "uuid": "f5d4ae47-b306-49c9-8e40-b82be7a801ed", 00:09:54.828 "is_configured": true, 00:09:54.828 "data_offset": 0, 00:09:54.828 "data_size": 65536 00:09:54.828 }, 00:09:54.828 { 00:09:54.828 "name": "BaseBdev3", 00:09:54.828 "uuid": "30893935-5681-4901-be12-a96a6f8bd90d", 00:09:54.828 "is_configured": true, 00:09:54.828 "data_offset": 0, 00:09:54.828 "data_size": 65536 00:09:54.828 } 00:09:54.828 ] 00:09:54.828 } 00:09:54.828 } 00:09:54.828 }' 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:54.828 BaseBdev2 00:09:54.828 BaseBdev3' 00:09:54.828 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.087 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.088 [2024-10-13 02:24:13.702181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.088 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.346 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.346 "name": "Existed_Raid", 00:09:55.346 "uuid": "331f316c-62ca-4a56-9002-71ef4d17f438", 00:09:55.346 "strip_size_kb": 0, 00:09:55.346 "state": "online", 00:09:55.346 "raid_level": "raid1", 00:09:55.346 "superblock": false, 00:09:55.346 "num_base_bdevs": 3, 00:09:55.346 "num_base_bdevs_discovered": 2, 00:09:55.346 "num_base_bdevs_operational": 2, 00:09:55.346 "base_bdevs_list": [ 00:09:55.346 { 00:09:55.346 "name": null, 00:09:55.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.346 "is_configured": false, 00:09:55.346 "data_offset": 0, 00:09:55.346 "data_size": 65536 00:09:55.346 }, 00:09:55.346 { 00:09:55.346 "name": "BaseBdev2", 00:09:55.346 "uuid": "f5d4ae47-b306-49c9-8e40-b82be7a801ed", 00:09:55.346 "is_configured": true, 00:09:55.346 "data_offset": 0, 00:09:55.346 "data_size": 65536 00:09:55.346 }, 00:09:55.346 { 00:09:55.346 "name": "BaseBdev3", 00:09:55.346 "uuid": "30893935-5681-4901-be12-a96a6f8bd90d", 00:09:55.346 "is_configured": true, 00:09:55.346 "data_offset": 0, 00:09:55.346 "data_size": 65536 00:09:55.346 } 00:09:55.346 ] 
00:09:55.346 }' 00:09:55.346 02:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.346 02:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.604 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:55.604 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:55.604 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:55.604 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.605 [2024-10-13 02:24:14.172784] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:55.605 02:24:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.605 [2024-10-13 02:24:14.240176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:55.605 [2024-10-13 02:24:14.240370] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.605 [2024-10-13 02:24:14.252262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.605 [2024-10-13 02:24:14.252383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.605 [2024-10-13 02:24:14.252431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:55.605 02:24:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:55.605 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 BaseBdev2 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.884 
02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 [ 00:09:55.884 { 00:09:55.884 "name": "BaseBdev2", 00:09:55.884 "aliases": [ 00:09:55.884 "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff" 00:09:55.884 ], 00:09:55.884 "product_name": "Malloc disk", 00:09:55.884 "block_size": 512, 00:09:55.884 "num_blocks": 65536, 00:09:55.884 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:55.884 "assigned_rate_limits": { 00:09:55.884 "rw_ios_per_sec": 0, 00:09:55.884 "rw_mbytes_per_sec": 0, 00:09:55.884 "r_mbytes_per_sec": 0, 00:09:55.884 "w_mbytes_per_sec": 0 00:09:55.884 }, 00:09:55.884 "claimed": false, 00:09:55.884 "zoned": false, 00:09:55.884 "supported_io_types": { 00:09:55.884 "read": true, 00:09:55.884 "write": true, 00:09:55.884 "unmap": true, 00:09:55.884 "flush": true, 00:09:55.884 "reset": true, 00:09:55.884 "nvme_admin": false, 00:09:55.884 "nvme_io": false, 00:09:55.884 "nvme_io_md": false, 00:09:55.884 "write_zeroes": true, 
00:09:55.884 "zcopy": true, 00:09:55.884 "get_zone_info": false, 00:09:55.884 "zone_management": false, 00:09:55.884 "zone_append": false, 00:09:55.884 "compare": false, 00:09:55.884 "compare_and_write": false, 00:09:55.884 "abort": true, 00:09:55.884 "seek_hole": false, 00:09:55.884 "seek_data": false, 00:09:55.884 "copy": true, 00:09:55.884 "nvme_iov_md": false 00:09:55.884 }, 00:09:55.884 "memory_domains": [ 00:09:55.884 { 00:09:55.884 "dma_device_id": "system", 00:09:55.884 "dma_device_type": 1 00:09:55.884 }, 00:09:55.884 { 00:09:55.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.884 "dma_device_type": 2 00:09:55.884 } 00:09:55.884 ], 00:09:55.884 "driver_specific": {} 00:09:55.884 } 00:09:55.884 ] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 BaseBdev3 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.884 02:24:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 [ 00:09:55.884 { 00:09:55.884 "name": "BaseBdev3", 00:09:55.884 "aliases": [ 00:09:55.884 "5c6e5ec5-e714-468b-ad81-65af804d5251" 00:09:55.884 ], 00:09:55.884 "product_name": "Malloc disk", 00:09:55.884 "block_size": 512, 00:09:55.884 "num_blocks": 65536, 00:09:55.884 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:55.884 "assigned_rate_limits": { 00:09:55.884 "rw_ios_per_sec": 0, 00:09:55.884 "rw_mbytes_per_sec": 0, 00:09:55.884 "r_mbytes_per_sec": 0, 00:09:55.884 "w_mbytes_per_sec": 0 00:09:55.884 }, 00:09:55.884 "claimed": false, 00:09:55.884 "zoned": false, 00:09:55.884 "supported_io_types": { 00:09:55.884 "read": true, 00:09:55.884 "write": true, 00:09:55.884 "unmap": true, 00:09:55.884 "flush": true, 00:09:55.884 "reset": true, 00:09:55.884 "nvme_admin": false, 00:09:55.884 "nvme_io": false, 00:09:55.884 "nvme_io_md": false, 00:09:55.884 "write_zeroes": true, 
00:09:55.884 "zcopy": true, 00:09:55.884 "get_zone_info": false, 00:09:55.884 "zone_management": false, 00:09:55.884 "zone_append": false, 00:09:55.884 "compare": false, 00:09:55.884 "compare_and_write": false, 00:09:55.884 "abort": true, 00:09:55.884 "seek_hole": false, 00:09:55.884 "seek_data": false, 00:09:55.884 "copy": true, 00:09:55.884 "nvme_iov_md": false 00:09:55.884 }, 00:09:55.884 "memory_domains": [ 00:09:55.884 { 00:09:55.884 "dma_device_id": "system", 00:09:55.884 "dma_device_type": 1 00:09:55.884 }, 00:09:55.884 { 00:09:55.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.884 "dma_device_type": 2 00:09:55.884 } 00:09:55.884 ], 00:09:55.884 "driver_specific": {} 00:09:55.884 } 00:09:55.884 ] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 [2024-10-13 02:24:14.431880] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.884 [2024-10-13 02:24:14.432037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.884 [2024-10-13 02:24:14.432078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.884 [2024-10-13 02:24:14.433914] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.884 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:55.885 "name": "Existed_Raid", 00:09:55.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.885 "strip_size_kb": 0, 00:09:55.885 "state": "configuring", 00:09:55.885 "raid_level": "raid1", 00:09:55.885 "superblock": false, 00:09:55.885 "num_base_bdevs": 3, 00:09:55.885 "num_base_bdevs_discovered": 2, 00:09:55.885 "num_base_bdevs_operational": 3, 00:09:55.885 "base_bdevs_list": [ 00:09:55.885 { 00:09:55.885 "name": "BaseBdev1", 00:09:55.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.885 "is_configured": false, 00:09:55.885 "data_offset": 0, 00:09:55.885 "data_size": 0 00:09:55.885 }, 00:09:55.885 { 00:09:55.885 "name": "BaseBdev2", 00:09:55.885 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:55.885 "is_configured": true, 00:09:55.885 "data_offset": 0, 00:09:55.885 "data_size": 65536 00:09:55.885 }, 00:09:55.885 { 00:09:55.885 "name": "BaseBdev3", 00:09:55.885 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:55.885 "is_configured": true, 00:09:55.885 "data_offset": 0, 00:09:55.885 "data_size": 65536 00:09:55.885 } 00:09:55.885 ] 00:09:55.885 }' 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.885 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.457 [2024-10-13 02:24:14.907158] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.457 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.457 "name": "Existed_Raid", 00:09:56.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.457 "strip_size_kb": 0, 00:09:56.457 "state": "configuring", 00:09:56.457 "raid_level": "raid1", 00:09:56.457 "superblock": false, 00:09:56.457 "num_base_bdevs": 3, 
00:09:56.457 "num_base_bdevs_discovered": 1, 00:09:56.457 "num_base_bdevs_operational": 3, 00:09:56.457 "base_bdevs_list": [ 00:09:56.457 { 00:09:56.457 "name": "BaseBdev1", 00:09:56.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.457 "is_configured": false, 00:09:56.457 "data_offset": 0, 00:09:56.457 "data_size": 0 00:09:56.457 }, 00:09:56.457 { 00:09:56.457 "name": null, 00:09:56.457 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:56.457 "is_configured": false, 00:09:56.457 "data_offset": 0, 00:09:56.457 "data_size": 65536 00:09:56.457 }, 00:09:56.457 { 00:09:56.457 "name": "BaseBdev3", 00:09:56.457 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:56.457 "is_configured": true, 00:09:56.457 "data_offset": 0, 00:09:56.457 "data_size": 65536 00:09:56.457 } 00:09:56.457 ] 00:09:56.458 }' 00:09:56.458 02:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.458 02:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.722 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:56.722 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.722 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.722 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.722 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.722 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:56.722 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.722 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.722 02:24:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.722 [2024-10-13 02:24:15.401313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.722 BaseBdev1 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.980 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.981 [ 00:09:56.981 { 00:09:56.981 "name": "BaseBdev1", 00:09:56.981 "aliases": [ 00:09:56.981 "87715380-a5f8-4d3f-8433-2e1175cc38dc" 00:09:56.981 ], 00:09:56.981 "product_name": "Malloc disk", 
00:09:56.981 "block_size": 512, 00:09:56.981 "num_blocks": 65536, 00:09:56.981 "uuid": "87715380-a5f8-4d3f-8433-2e1175cc38dc", 00:09:56.981 "assigned_rate_limits": { 00:09:56.981 "rw_ios_per_sec": 0, 00:09:56.981 "rw_mbytes_per_sec": 0, 00:09:56.981 "r_mbytes_per_sec": 0, 00:09:56.981 "w_mbytes_per_sec": 0 00:09:56.981 }, 00:09:56.981 "claimed": true, 00:09:56.981 "claim_type": "exclusive_write", 00:09:56.981 "zoned": false, 00:09:56.981 "supported_io_types": { 00:09:56.981 "read": true, 00:09:56.981 "write": true, 00:09:56.981 "unmap": true, 00:09:56.981 "flush": true, 00:09:56.981 "reset": true, 00:09:56.981 "nvme_admin": false, 00:09:56.981 "nvme_io": false, 00:09:56.981 "nvme_io_md": false, 00:09:56.981 "write_zeroes": true, 00:09:56.981 "zcopy": true, 00:09:56.981 "get_zone_info": false, 00:09:56.981 "zone_management": false, 00:09:56.981 "zone_append": false, 00:09:56.981 "compare": false, 00:09:56.981 "compare_and_write": false, 00:09:56.981 "abort": true, 00:09:56.981 "seek_hole": false, 00:09:56.981 "seek_data": false, 00:09:56.981 "copy": true, 00:09:56.981 "nvme_iov_md": false 00:09:56.981 }, 00:09:56.981 "memory_domains": [ 00:09:56.981 { 00:09:56.981 "dma_device_id": "system", 00:09:56.981 "dma_device_type": 1 00:09:56.981 }, 00:09:56.981 { 00:09:56.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.981 "dma_device_type": 2 00:09:56.981 } 00:09:56.981 ], 00:09:56.981 "driver_specific": {} 00:09:56.981 } 00:09:56.981 ] 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.981 "name": "Existed_Raid", 00:09:56.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.981 "strip_size_kb": 0, 00:09:56.981 "state": "configuring", 00:09:56.981 "raid_level": "raid1", 00:09:56.981 "superblock": false, 00:09:56.981 "num_base_bdevs": 3, 00:09:56.981 "num_base_bdevs_discovered": 2, 00:09:56.981 "num_base_bdevs_operational": 3, 00:09:56.981 "base_bdevs_list": [ 00:09:56.981 { 00:09:56.981 "name": "BaseBdev1", 00:09:56.981 "uuid": 
"87715380-a5f8-4d3f-8433-2e1175cc38dc", 00:09:56.981 "is_configured": true, 00:09:56.981 "data_offset": 0, 00:09:56.981 "data_size": 65536 00:09:56.981 }, 00:09:56.981 { 00:09:56.981 "name": null, 00:09:56.981 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:56.981 "is_configured": false, 00:09:56.981 "data_offset": 0, 00:09:56.981 "data_size": 65536 00:09:56.981 }, 00:09:56.981 { 00:09:56.981 "name": "BaseBdev3", 00:09:56.981 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:56.981 "is_configured": true, 00:09:56.981 "data_offset": 0, 00:09:56.981 "data_size": 65536 00:09:56.981 } 00:09:56.981 ] 00:09:56.981 }' 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.981 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.241 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.241 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:57.241 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.241 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.596 [2024-10-13 02:24:15.956429] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.596 02:24:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.596 02:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.596 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.596 "name": "Existed_Raid", 00:09:58.596 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.596 "strip_size_kb": 0, 00:09:58.596 "state": "configuring", 00:09:58.596 "raid_level": "raid1", 00:09:58.596 "superblock": false, 00:09:58.596 "num_base_bdevs": 3, 00:09:58.596 "num_base_bdevs_discovered": 1, 00:09:58.596 "num_base_bdevs_operational": 3, 00:09:58.596 "base_bdevs_list": [ 00:09:58.596 { 00:09:58.596 "name": "BaseBdev1", 00:09:58.596 "uuid": "87715380-a5f8-4d3f-8433-2e1175cc38dc", 00:09:58.596 "is_configured": true, 00:09:58.596 "data_offset": 0, 00:09:58.596 "data_size": 65536 00:09:58.596 }, 00:09:58.596 { 00:09:58.596 "name": null, 00:09:58.596 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:58.596 "is_configured": false, 00:09:58.596 "data_offset": 0, 00:09:58.596 "data_size": 65536 00:09:58.597 }, 00:09:58.597 { 00:09:58.597 "name": null, 00:09:58.597 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:58.597 "is_configured": false, 00:09:58.597 "data_offset": 0, 00:09:58.597 "data_size": 65536 00:09:58.597 } 00:09:58.597 ] 00:09:58.597 }' 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.597 [2024-10-13 02:24:16.383740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.597 "name": "Existed_Raid", 00:09:58.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.597 "strip_size_kb": 0, 00:09:58.597 "state": "configuring", 00:09:58.597 "raid_level": "raid1", 00:09:58.597 "superblock": false, 00:09:58.597 "num_base_bdevs": 3, 00:09:58.597 "num_base_bdevs_discovered": 2, 00:09:58.597 "num_base_bdevs_operational": 3, 00:09:58.597 "base_bdevs_list": [ 00:09:58.597 { 00:09:58.597 "name": "BaseBdev1", 00:09:58.597 "uuid": "87715380-a5f8-4d3f-8433-2e1175cc38dc", 00:09:58.597 "is_configured": true, 00:09:58.597 "data_offset": 0, 00:09:58.597 "data_size": 65536 00:09:58.597 }, 00:09:58.597 { 00:09:58.597 "name": null, 00:09:58.597 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:58.597 "is_configured": false, 00:09:58.597 "data_offset": 0, 00:09:58.597 "data_size": 65536 00:09:58.597 }, 00:09:58.597 { 00:09:58.597 "name": "BaseBdev3", 00:09:58.597 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:58.597 "is_configured": true, 00:09:58.597 "data_offset": 0, 00:09:58.597 "data_size": 65536 00:09:58.597 } 00:09:58.597 ] 00:09:58.597 }' 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.597 02:24:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.597 [2024-10-13 02:24:16.907151] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.597 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.597 "name": "Existed_Raid", 00:09:58.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.597 "strip_size_kb": 0, 00:09:58.597 "state": "configuring", 00:09:58.597 "raid_level": "raid1", 00:09:58.597 "superblock": false, 00:09:58.597 "num_base_bdevs": 3, 00:09:58.597 "num_base_bdevs_discovered": 1, 00:09:58.597 "num_base_bdevs_operational": 3, 00:09:58.597 "base_bdevs_list": [ 00:09:58.597 { 00:09:58.597 "name": null, 00:09:58.598 "uuid": "87715380-a5f8-4d3f-8433-2e1175cc38dc", 00:09:58.598 "is_configured": false, 00:09:58.598 "data_offset": 0, 00:09:58.598 "data_size": 65536 00:09:58.598 }, 00:09:58.598 { 00:09:58.598 "name": null, 00:09:58.598 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:58.598 "is_configured": false, 00:09:58.598 "data_offset": 0, 00:09:58.598 "data_size": 65536 00:09:58.598 }, 00:09:58.598 { 00:09:58.598 "name": "BaseBdev3", 00:09:58.598 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:58.598 "is_configured": true, 00:09:58.598 "data_offset": 0, 00:09:58.598 "data_size": 65536 00:09:58.598 } 00:09:58.598 ] 00:09:58.598 }' 00:09:58.598 02:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.598 02:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 [2024-10-13 02:24:17.412924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.856 "name": "Existed_Raid", 00:09:58.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.856 "strip_size_kb": 0, 00:09:58.856 "state": "configuring", 00:09:58.856 "raid_level": "raid1", 00:09:58.856 "superblock": false, 00:09:58.856 "num_base_bdevs": 3, 00:09:58.856 "num_base_bdevs_discovered": 2, 00:09:58.856 "num_base_bdevs_operational": 3, 00:09:58.856 "base_bdevs_list": [ 00:09:58.856 { 00:09:58.856 "name": null, 00:09:58.856 "uuid": "87715380-a5f8-4d3f-8433-2e1175cc38dc", 00:09:58.856 "is_configured": false, 00:09:58.856 "data_offset": 0, 00:09:58.856 "data_size": 65536 00:09:58.856 }, 00:09:58.856 { 00:09:58.856 "name": "BaseBdev2", 00:09:58.856 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:58.856 "is_configured": true, 00:09:58.856 "data_offset": 0, 00:09:58.856 "data_size": 65536 00:09:58.856 }, 00:09:58.856 { 00:09:58.856 "name": "BaseBdev3", 
00:09:58.856 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:58.856 "is_configured": true, 00:09:58.856 "data_offset": 0, 00:09:58.856 "data_size": 65536 00:09:58.856 } 00:09:58.856 ] 00:09:58.856 }' 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.856 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 87715380-a5f8-4d3f-8433-2e1175cc38dc 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.424 [2024-10-13 02:24:17.910983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:59.424 [2024-10-13 02:24:17.911131] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:59.424 [2024-10-13 02:24:17.911157] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:59.424 [2024-10-13 02:24:17.911425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:59.424 [2024-10-13 02:24:17.911578] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:59.424 [2024-10-13 02:24:17.911620] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:59.424 [2024-10-13 02:24:17.911833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.424 NewBaseBdev 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.424 
02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.424 [ 00:09:59.424 { 00:09:59.424 "name": "NewBaseBdev", 00:09:59.424 "aliases": [ 00:09:59.424 "87715380-a5f8-4d3f-8433-2e1175cc38dc" 00:09:59.424 ], 00:09:59.424 "product_name": "Malloc disk", 00:09:59.424 "block_size": 512, 00:09:59.424 "num_blocks": 65536, 00:09:59.424 "uuid": "87715380-a5f8-4d3f-8433-2e1175cc38dc", 00:09:59.424 "assigned_rate_limits": { 00:09:59.424 "rw_ios_per_sec": 0, 00:09:59.424 "rw_mbytes_per_sec": 0, 00:09:59.424 "r_mbytes_per_sec": 0, 00:09:59.424 "w_mbytes_per_sec": 0 00:09:59.424 }, 00:09:59.424 "claimed": true, 00:09:59.424 "claim_type": "exclusive_write", 00:09:59.424 "zoned": false, 00:09:59.424 "supported_io_types": { 00:09:59.424 "read": true, 00:09:59.424 "write": true, 00:09:59.424 "unmap": true, 00:09:59.424 "flush": true, 00:09:59.424 "reset": true, 00:09:59.424 "nvme_admin": false, 00:09:59.424 "nvme_io": false, 00:09:59.424 "nvme_io_md": false, 00:09:59.424 "write_zeroes": true, 00:09:59.424 "zcopy": true, 00:09:59.424 "get_zone_info": false, 00:09:59.424 "zone_management": false, 00:09:59.424 "zone_append": false, 00:09:59.424 "compare": false, 00:09:59.424 "compare_and_write": false, 00:09:59.424 "abort": true, 00:09:59.424 "seek_hole": false, 00:09:59.424 "seek_data": false, 00:09:59.424 "copy": true, 00:09:59.424 "nvme_iov_md": false 00:09:59.424 }, 00:09:59.424 "memory_domains": [ 00:09:59.424 { 00:09:59.424 "dma_device_id": "system", 00:09:59.424 "dma_device_type": 1 
00:09:59.424 }, 00:09:59.424 { 00:09:59.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.424 "dma_device_type": 2 00:09:59.424 } 00:09:59.424 ], 00:09:59.424 "driver_specific": {} 00:09:59.424 } 00:09:59.424 ] 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:59.424 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.425 "name": "Existed_Raid", 00:09:59.425 "uuid": "810fdfb3-2143-4c58-9949-e3c6356f902e", 00:09:59.425 "strip_size_kb": 0, 00:09:59.425 "state": "online", 00:09:59.425 "raid_level": "raid1", 00:09:59.425 "superblock": false, 00:09:59.425 "num_base_bdevs": 3, 00:09:59.425 "num_base_bdevs_discovered": 3, 00:09:59.425 "num_base_bdevs_operational": 3, 00:09:59.425 "base_bdevs_list": [ 00:09:59.425 { 00:09:59.425 "name": "NewBaseBdev", 00:09:59.425 "uuid": "87715380-a5f8-4d3f-8433-2e1175cc38dc", 00:09:59.425 "is_configured": true, 00:09:59.425 "data_offset": 0, 00:09:59.425 "data_size": 65536 00:09:59.425 }, 00:09:59.425 { 00:09:59.425 "name": "BaseBdev2", 00:09:59.425 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:59.425 "is_configured": true, 00:09:59.425 "data_offset": 0, 00:09:59.425 "data_size": 65536 00:09:59.425 }, 00:09:59.425 { 00:09:59.425 "name": "BaseBdev3", 00:09:59.425 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:59.425 "is_configured": true, 00:09:59.425 "data_offset": 0, 00:09:59.425 "data_size": 65536 00:09:59.425 } 00:09:59.425 ] 00:09:59.425 }' 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.425 02:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.994 [2024-10-13 02:24:18.382698] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.994 "name": "Existed_Raid", 00:09:59.994 "aliases": [ 00:09:59.994 "810fdfb3-2143-4c58-9949-e3c6356f902e" 00:09:59.994 ], 00:09:59.994 "product_name": "Raid Volume", 00:09:59.994 "block_size": 512, 00:09:59.994 "num_blocks": 65536, 00:09:59.994 "uuid": "810fdfb3-2143-4c58-9949-e3c6356f902e", 00:09:59.994 "assigned_rate_limits": { 00:09:59.994 "rw_ios_per_sec": 0, 00:09:59.994 "rw_mbytes_per_sec": 0, 00:09:59.994 "r_mbytes_per_sec": 0, 00:09:59.994 "w_mbytes_per_sec": 0 00:09:59.994 }, 00:09:59.994 "claimed": false, 00:09:59.994 "zoned": false, 00:09:59.994 "supported_io_types": { 00:09:59.994 "read": true, 00:09:59.994 "write": true, 00:09:59.994 "unmap": false, 00:09:59.994 "flush": false, 00:09:59.994 "reset": true, 00:09:59.994 "nvme_admin": false, 00:09:59.994 "nvme_io": false, 00:09:59.994 "nvme_io_md": false, 00:09:59.994 "write_zeroes": true, 00:09:59.994 "zcopy": false, 00:09:59.994 "get_zone_info": false, 00:09:59.994 "zone_management": false, 00:09:59.994 
"zone_append": false, 00:09:59.994 "compare": false, 00:09:59.994 "compare_and_write": false, 00:09:59.994 "abort": false, 00:09:59.994 "seek_hole": false, 00:09:59.994 "seek_data": false, 00:09:59.994 "copy": false, 00:09:59.994 "nvme_iov_md": false 00:09:59.994 }, 00:09:59.994 "memory_domains": [ 00:09:59.994 { 00:09:59.994 "dma_device_id": "system", 00:09:59.994 "dma_device_type": 1 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.994 "dma_device_type": 2 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "dma_device_id": "system", 00:09:59.994 "dma_device_type": 1 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.994 "dma_device_type": 2 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "dma_device_id": "system", 00:09:59.994 "dma_device_type": 1 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.994 "dma_device_type": 2 00:09:59.994 } 00:09:59.994 ], 00:09:59.994 "driver_specific": { 00:09:59.994 "raid": { 00:09:59.994 "uuid": "810fdfb3-2143-4c58-9949-e3c6356f902e", 00:09:59.994 "strip_size_kb": 0, 00:09:59.994 "state": "online", 00:09:59.994 "raid_level": "raid1", 00:09:59.994 "superblock": false, 00:09:59.994 "num_base_bdevs": 3, 00:09:59.994 "num_base_bdevs_discovered": 3, 00:09:59.994 "num_base_bdevs_operational": 3, 00:09:59.994 "base_bdevs_list": [ 00:09:59.994 { 00:09:59.994 "name": "NewBaseBdev", 00:09:59.994 "uuid": "87715380-a5f8-4d3f-8433-2e1175cc38dc", 00:09:59.994 "is_configured": true, 00:09:59.994 "data_offset": 0, 00:09:59.994 "data_size": 65536 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "name": "BaseBdev2", 00:09:59.994 "uuid": "fe6f3e3c-32b0-4378-b808-6cd1d940a4ff", 00:09:59.994 "is_configured": true, 00:09:59.994 "data_offset": 0, 00:09:59.994 "data_size": 65536 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "name": "BaseBdev3", 00:09:59.994 "uuid": "5c6e5ec5-e714-468b-ad81-65af804d5251", 00:09:59.994 "is_configured": true, 
00:09:59.994 "data_offset": 0, 00:09:59.994 "data_size": 65536 00:09:59.994 } 00:09:59.994 ] 00:09:59.994 } 00:09:59.994 } 00:09:59.994 }' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:59.994 BaseBdev2 00:09:59.994 BaseBdev3' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.994 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.994 [2024-10-13 02:24:18.673928] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:59.994 [2024-10-13 02:24:18.674024] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.994 [2024-10-13 02:24:18.674116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.994 [2024-10-13 02:24:18.674389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.994 [2024-10-13 02:24:18.674443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78339 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78339 ']' 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78339 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78339 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.254 killing process with pid 78339 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78339' 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78339 00:10:00.254 [2024-10-13 02:24:18.722691] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:10:00.254 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78339 00:10:00.254 [2024-10-13 02:24:18.754279] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.513 02:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:00.513 00:10:00.513 real 0m8.906s 00:10:00.514 user 0m15.129s 00:10:00.514 sys 0m1.881s 00:10:00.514 02:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.514 ************************************ 00:10:00.514 END TEST raid_state_function_test 00:10:00.514 ************************************ 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.514 02:24:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:00.514 02:24:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:00.514 02:24:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.514 02:24:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.514 ************************************ 00:10:00.514 START TEST raid_state_function_test_sb 00:10:00.514 ************************************ 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78944 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78944' 00:10:00.514 Process raid pid: 78944 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78944 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78944 ']' 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.514 02:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.514 [2024-10-13 02:24:19.163827] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:00.514 [2024-10-13 02:24:19.164026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.774 [2024-10-13 02:24:19.303442] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.774 [2024-10-13 02:24:19.351835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.774 [2024-10-13 02:24:19.395453] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.774 [2024-10-13 02:24:19.395493] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.342 [2024-10-13 02:24:20.017250] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.342 [2024-10-13 02:24:20.017400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.342 [2024-10-13 02:24:20.017430] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.342 [2024-10-13 02:24:20.017455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.342 [2024-10-13 02:24:20.017473] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:01.342 [2024-10-13 02:24:20.017495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.342 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.601 "name": "Existed_Raid", 00:10:01.601 "uuid": "14b2607d-7b43-4da2-ab78-5b65acf5fd8d", 00:10:01.601 "strip_size_kb": 0, 00:10:01.601 "state": "configuring", 00:10:01.601 "raid_level": "raid1", 00:10:01.601 "superblock": true, 00:10:01.601 "num_base_bdevs": 3, 00:10:01.601 "num_base_bdevs_discovered": 0, 00:10:01.601 "num_base_bdevs_operational": 3, 00:10:01.601 "base_bdevs_list": [ 00:10:01.601 { 00:10:01.601 "name": "BaseBdev1", 00:10:01.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.601 "is_configured": false, 00:10:01.601 "data_offset": 0, 00:10:01.601 "data_size": 0 00:10:01.601 }, 00:10:01.601 { 00:10:01.601 "name": "BaseBdev2", 00:10:01.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.601 "is_configured": false, 00:10:01.601 "data_offset": 0, 00:10:01.601 "data_size": 0 00:10:01.601 }, 00:10:01.601 { 00:10:01.601 "name": "BaseBdev3", 00:10:01.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.601 "is_configured": false, 00:10:01.601 "data_offset": 0, 00:10:01.601 "data_size": 0 00:10:01.601 } 00:10:01.601 ] 00:10:01.601 }' 00:10:01.601 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.602 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.861 [2024-10-13 02:24:20.484355] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.861 [2024-10-13 02:24:20.484495] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.861 [2024-10-13 02:24:20.496342] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.861 [2024-10-13 02:24:20.496430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.861 [2024-10-13 02:24:20.496456] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.861 [2024-10-13 02:24:20.496468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.861 [2024-10-13 02:24:20.496475] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.861 [2024-10-13 02:24:20.496484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.861 [2024-10-13 02:24:20.516992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.861 BaseBdev1 
00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.861 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.861 [ 00:10:01.861 { 00:10:01.861 "name": "BaseBdev1", 00:10:01.861 "aliases": [ 00:10:02.120 "f5990cd5-b3aa-4e46-9e7b-eebddcb7c3fa" 00:10:02.120 ], 00:10:02.120 "product_name": "Malloc disk", 00:10:02.120 "block_size": 512, 00:10:02.120 "num_blocks": 65536, 00:10:02.120 "uuid": "f5990cd5-b3aa-4e46-9e7b-eebddcb7c3fa", 00:10:02.120 "assigned_rate_limits": { 00:10:02.120 
"rw_ios_per_sec": 0, 00:10:02.120 "rw_mbytes_per_sec": 0, 00:10:02.120 "r_mbytes_per_sec": 0, 00:10:02.120 "w_mbytes_per_sec": 0 00:10:02.120 }, 00:10:02.120 "claimed": true, 00:10:02.120 "claim_type": "exclusive_write", 00:10:02.120 "zoned": false, 00:10:02.120 "supported_io_types": { 00:10:02.120 "read": true, 00:10:02.120 "write": true, 00:10:02.120 "unmap": true, 00:10:02.120 "flush": true, 00:10:02.120 "reset": true, 00:10:02.120 "nvme_admin": false, 00:10:02.120 "nvme_io": false, 00:10:02.120 "nvme_io_md": false, 00:10:02.120 "write_zeroes": true, 00:10:02.120 "zcopy": true, 00:10:02.120 "get_zone_info": false, 00:10:02.120 "zone_management": false, 00:10:02.120 "zone_append": false, 00:10:02.120 "compare": false, 00:10:02.120 "compare_and_write": false, 00:10:02.120 "abort": true, 00:10:02.120 "seek_hole": false, 00:10:02.120 "seek_data": false, 00:10:02.120 "copy": true, 00:10:02.120 "nvme_iov_md": false 00:10:02.120 }, 00:10:02.120 "memory_domains": [ 00:10:02.120 { 00:10:02.120 "dma_device_id": "system", 00:10:02.120 "dma_device_type": 1 00:10:02.120 }, 00:10:02.120 { 00:10:02.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.120 "dma_device_type": 2 00:10:02.120 } 00:10:02.120 ], 00:10:02.120 "driver_specific": {} 00:10:02.120 } 00:10:02.120 ] 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.120 "name": "Existed_Raid", 00:10:02.120 "uuid": "57946db1-184e-4eb3-9cda-042be6a81f88", 00:10:02.120 "strip_size_kb": 0, 00:10:02.120 "state": "configuring", 00:10:02.120 "raid_level": "raid1", 00:10:02.120 "superblock": true, 00:10:02.120 "num_base_bdevs": 3, 00:10:02.120 "num_base_bdevs_discovered": 1, 00:10:02.120 "num_base_bdevs_operational": 3, 00:10:02.120 "base_bdevs_list": [ 00:10:02.120 { 00:10:02.120 "name": "BaseBdev1", 00:10:02.120 "uuid": "f5990cd5-b3aa-4e46-9e7b-eebddcb7c3fa", 00:10:02.120 "is_configured": true, 00:10:02.120 "data_offset": 2048, 00:10:02.120 "data_size": 63488 
00:10:02.120 }, 00:10:02.120 { 00:10:02.120 "name": "BaseBdev2", 00:10:02.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.120 "is_configured": false, 00:10:02.120 "data_offset": 0, 00:10:02.120 "data_size": 0 00:10:02.120 }, 00:10:02.120 { 00:10:02.120 "name": "BaseBdev3", 00:10:02.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.120 "is_configured": false, 00:10:02.120 "data_offset": 0, 00:10:02.120 "data_size": 0 00:10:02.120 } 00:10:02.120 ] 00:10:02.120 }' 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.120 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.380 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.380 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.380 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.380 [2024-10-13 02:24:20.984399] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.380 [2024-10-13 02:24:20.984544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:02.380 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.380 02:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.380 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.380 02:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.380 [2024-10-13 02:24:20.996419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.380 [2024-10-13 02:24:20.998340] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.380 [2024-10-13 02:24:20.998418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.380 [2024-10-13 02:24:20.998445] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.380 [2024-10-13 02:24:20.998469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.380 "name": "Existed_Raid", 00:10:02.380 "uuid": "613e743d-7429-4624-934c-2cd99261636b", 00:10:02.380 "strip_size_kb": 0, 00:10:02.380 "state": "configuring", 00:10:02.380 "raid_level": "raid1", 00:10:02.380 "superblock": true, 00:10:02.380 "num_base_bdevs": 3, 00:10:02.380 "num_base_bdevs_discovered": 1, 00:10:02.380 "num_base_bdevs_operational": 3, 00:10:02.380 "base_bdevs_list": [ 00:10:02.380 { 00:10:02.380 "name": "BaseBdev1", 00:10:02.380 "uuid": "f5990cd5-b3aa-4e46-9e7b-eebddcb7c3fa", 00:10:02.380 "is_configured": true, 00:10:02.380 "data_offset": 2048, 00:10:02.380 "data_size": 63488 00:10:02.380 }, 00:10:02.380 { 00:10:02.380 "name": "BaseBdev2", 00:10:02.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.380 "is_configured": false, 00:10:02.380 "data_offset": 0, 00:10:02.380 "data_size": 0 00:10:02.380 }, 00:10:02.380 { 00:10:02.380 "name": "BaseBdev3", 00:10:02.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.380 "is_configured": false, 00:10:02.380 "data_offset": 0, 00:10:02.380 "data_size": 0 00:10:02.380 } 00:10:02.380 ] 00:10:02.380 }' 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.380 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.948 [2024-10-13 02:24:21.416988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.948 BaseBdev2 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:02.948 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.948 [ 00:10:02.948 { 00:10:02.948 "name": "BaseBdev2", 00:10:02.948 "aliases": [ 00:10:02.948 "1a4f8ab9-a535-436b-b8f1-5115a2fe1c5b" 00:10:02.948 ], 00:10:02.948 "product_name": "Malloc disk", 00:10:02.948 "block_size": 512, 00:10:02.948 "num_blocks": 65536, 00:10:02.948 "uuid": "1a4f8ab9-a535-436b-b8f1-5115a2fe1c5b", 00:10:02.948 "assigned_rate_limits": { 00:10:02.948 "rw_ios_per_sec": 0, 00:10:02.948 "rw_mbytes_per_sec": 0, 00:10:02.948 "r_mbytes_per_sec": 0, 00:10:02.948 "w_mbytes_per_sec": 0 00:10:02.948 }, 00:10:02.948 "claimed": true, 00:10:02.948 "claim_type": "exclusive_write", 00:10:02.948 "zoned": false, 00:10:02.949 "supported_io_types": { 00:10:02.949 "read": true, 00:10:02.949 "write": true, 00:10:02.949 "unmap": true, 00:10:02.949 "flush": true, 00:10:02.949 "reset": true, 00:10:02.949 "nvme_admin": false, 00:10:02.949 "nvme_io": false, 00:10:02.949 "nvme_io_md": false, 00:10:02.949 "write_zeroes": true, 00:10:02.949 "zcopy": true, 00:10:02.949 "get_zone_info": false, 00:10:02.949 "zone_management": false, 00:10:02.949 "zone_append": false, 00:10:02.949 "compare": false, 00:10:02.949 "compare_and_write": false, 00:10:02.949 "abort": true, 00:10:02.949 "seek_hole": false, 00:10:02.949 "seek_data": false, 00:10:02.949 "copy": true, 00:10:02.949 "nvme_iov_md": false 00:10:02.949 }, 00:10:02.949 "memory_domains": [ 00:10:02.949 { 00:10:02.949 "dma_device_id": "system", 00:10:02.949 "dma_device_type": 1 00:10:02.949 }, 00:10:02.949 { 00:10:02.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.949 "dma_device_type": 2 00:10:02.949 } 00:10:02.949 ], 00:10:02.949 "driver_specific": {} 00:10:02.949 } 00:10:02.949 ] 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.949 
02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.949 "name": "Existed_Raid", 00:10:02.949 "uuid": "613e743d-7429-4624-934c-2cd99261636b", 00:10:02.949 "strip_size_kb": 0, 00:10:02.949 "state": "configuring", 00:10:02.949 "raid_level": "raid1", 00:10:02.949 "superblock": true, 00:10:02.949 "num_base_bdevs": 3, 00:10:02.949 "num_base_bdevs_discovered": 2, 00:10:02.949 "num_base_bdevs_operational": 3, 00:10:02.949 "base_bdevs_list": [ 00:10:02.949 { 00:10:02.949 "name": "BaseBdev1", 00:10:02.949 "uuid": "f5990cd5-b3aa-4e46-9e7b-eebddcb7c3fa", 00:10:02.949 "is_configured": true, 00:10:02.949 "data_offset": 2048, 00:10:02.949 "data_size": 63488 00:10:02.949 }, 00:10:02.949 { 00:10:02.949 "name": "BaseBdev2", 00:10:02.949 "uuid": "1a4f8ab9-a535-436b-b8f1-5115a2fe1c5b", 00:10:02.949 "is_configured": true, 00:10:02.949 "data_offset": 2048, 00:10:02.949 "data_size": 63488 00:10:02.949 }, 00:10:02.949 { 00:10:02.949 "name": "BaseBdev3", 00:10:02.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.949 "is_configured": false, 00:10:02.949 "data_offset": 0, 00:10:02.949 "data_size": 0 00:10:02.949 } 00:10:02.949 ] 00:10:02.949 }' 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.949 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.209 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.209 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.209 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.209 [2024-10-13 02:24:21.887387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.209 [2024-10-13 02:24:21.887703] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000001900 00:10:03.209 [2024-10-13 02:24:21.887758] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:03.209 [2024-10-13 02:24:21.888074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:03.209 BaseBdev3 00:10:03.209 [2024-10-13 02:24:21.888254] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:03.209 [2024-10-13 02:24:21.888270] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:03.209 [2024-10-13 02:24:21.888390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.209 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.209 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:03.209 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:03.209 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.210 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:03.210 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.469 02:24:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.469 [ 00:10:03.469 { 00:10:03.469 "name": "BaseBdev3", 00:10:03.469 "aliases": [ 00:10:03.469 "e669eecb-976c-4ff8-b8ae-827ca62dd91b" 00:10:03.469 ], 00:10:03.469 "product_name": "Malloc disk", 00:10:03.469 "block_size": 512, 00:10:03.469 "num_blocks": 65536, 00:10:03.469 "uuid": "e669eecb-976c-4ff8-b8ae-827ca62dd91b", 00:10:03.469 "assigned_rate_limits": { 00:10:03.469 "rw_ios_per_sec": 0, 00:10:03.469 "rw_mbytes_per_sec": 0, 00:10:03.469 "r_mbytes_per_sec": 0, 00:10:03.469 "w_mbytes_per_sec": 0 00:10:03.469 }, 00:10:03.469 "claimed": true, 00:10:03.469 "claim_type": "exclusive_write", 00:10:03.469 "zoned": false, 00:10:03.469 "supported_io_types": { 00:10:03.469 "read": true, 00:10:03.469 "write": true, 00:10:03.469 "unmap": true, 00:10:03.469 "flush": true, 00:10:03.469 "reset": true, 00:10:03.469 "nvme_admin": false, 00:10:03.469 "nvme_io": false, 00:10:03.469 "nvme_io_md": false, 00:10:03.469 "write_zeroes": true, 00:10:03.469 "zcopy": true, 00:10:03.469 "get_zone_info": false, 00:10:03.469 "zone_management": false, 00:10:03.469 "zone_append": false, 00:10:03.469 "compare": false, 00:10:03.469 "compare_and_write": false, 00:10:03.469 "abort": true, 00:10:03.469 "seek_hole": false, 00:10:03.469 "seek_data": false, 00:10:03.469 "copy": true, 00:10:03.469 "nvme_iov_md": false 00:10:03.469 }, 00:10:03.469 "memory_domains": [ 00:10:03.469 { 00:10:03.469 "dma_device_id": "system", 00:10:03.469 "dma_device_type": 1 00:10:03.469 }, 00:10:03.469 { 00:10:03.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.469 "dma_device_type": 2 00:10:03.469 } 00:10:03.469 ], 00:10:03.469 "driver_specific": {} 00:10:03.469 } 00:10:03.469 ] 
00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.469 
02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.469 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.469 "name": "Existed_Raid", 00:10:03.469 "uuid": "613e743d-7429-4624-934c-2cd99261636b", 00:10:03.469 "strip_size_kb": 0, 00:10:03.469 "state": "online", 00:10:03.469 "raid_level": "raid1", 00:10:03.469 "superblock": true, 00:10:03.469 "num_base_bdevs": 3, 00:10:03.469 "num_base_bdevs_discovered": 3, 00:10:03.469 "num_base_bdevs_operational": 3, 00:10:03.469 "base_bdevs_list": [ 00:10:03.469 { 00:10:03.469 "name": "BaseBdev1", 00:10:03.469 "uuid": "f5990cd5-b3aa-4e46-9e7b-eebddcb7c3fa", 00:10:03.469 "is_configured": true, 00:10:03.469 "data_offset": 2048, 00:10:03.469 "data_size": 63488 00:10:03.469 }, 00:10:03.469 { 00:10:03.469 "name": "BaseBdev2", 00:10:03.469 "uuid": "1a4f8ab9-a535-436b-b8f1-5115a2fe1c5b", 00:10:03.469 "is_configured": true, 00:10:03.469 "data_offset": 2048, 00:10:03.469 "data_size": 63488 00:10:03.469 }, 00:10:03.469 { 00:10:03.469 "name": "BaseBdev3", 00:10:03.469 "uuid": "e669eecb-976c-4ff8-b8ae-827ca62dd91b", 00:10:03.469 "is_configured": true, 00:10:03.470 "data_offset": 2048, 00:10:03.470 "data_size": 63488 00:10:03.470 } 00:10:03.470 ] 00:10:03.470 }' 00:10:03.470 02:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.470 02:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.729 [2024-10-13 02:24:22.371001] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.729 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.729 "name": "Existed_Raid", 00:10:03.729 "aliases": [ 00:10:03.729 "613e743d-7429-4624-934c-2cd99261636b" 00:10:03.729 ], 00:10:03.729 "product_name": "Raid Volume", 00:10:03.729 "block_size": 512, 00:10:03.729 "num_blocks": 63488, 00:10:03.729 "uuid": "613e743d-7429-4624-934c-2cd99261636b", 00:10:03.729 "assigned_rate_limits": { 00:10:03.729 "rw_ios_per_sec": 0, 00:10:03.729 "rw_mbytes_per_sec": 0, 00:10:03.729 "r_mbytes_per_sec": 0, 00:10:03.729 "w_mbytes_per_sec": 0 00:10:03.729 }, 00:10:03.729 "claimed": false, 00:10:03.729 "zoned": false, 00:10:03.729 "supported_io_types": { 00:10:03.729 "read": true, 00:10:03.729 "write": true, 00:10:03.729 "unmap": false, 00:10:03.729 "flush": false, 00:10:03.729 "reset": true, 00:10:03.729 "nvme_admin": false, 00:10:03.729 "nvme_io": false, 00:10:03.729 "nvme_io_md": false, 00:10:03.729 "write_zeroes": true, 
00:10:03.729 "zcopy": false, 00:10:03.729 "get_zone_info": false, 00:10:03.729 "zone_management": false, 00:10:03.729 "zone_append": false, 00:10:03.729 "compare": false, 00:10:03.729 "compare_and_write": false, 00:10:03.729 "abort": false, 00:10:03.729 "seek_hole": false, 00:10:03.729 "seek_data": false, 00:10:03.729 "copy": false, 00:10:03.729 "nvme_iov_md": false 00:10:03.729 }, 00:10:03.729 "memory_domains": [ 00:10:03.729 { 00:10:03.729 "dma_device_id": "system", 00:10:03.729 "dma_device_type": 1 00:10:03.729 }, 00:10:03.729 { 00:10:03.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.729 "dma_device_type": 2 00:10:03.729 }, 00:10:03.729 { 00:10:03.729 "dma_device_id": "system", 00:10:03.729 "dma_device_type": 1 00:10:03.729 }, 00:10:03.729 { 00:10:03.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.729 "dma_device_type": 2 00:10:03.729 }, 00:10:03.729 { 00:10:03.729 "dma_device_id": "system", 00:10:03.729 "dma_device_type": 1 00:10:03.729 }, 00:10:03.729 { 00:10:03.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.729 "dma_device_type": 2 00:10:03.729 } 00:10:03.729 ], 00:10:03.729 "driver_specific": { 00:10:03.729 "raid": { 00:10:03.729 "uuid": "613e743d-7429-4624-934c-2cd99261636b", 00:10:03.729 "strip_size_kb": 0, 00:10:03.729 "state": "online", 00:10:03.729 "raid_level": "raid1", 00:10:03.729 "superblock": true, 00:10:03.729 "num_base_bdevs": 3, 00:10:03.730 "num_base_bdevs_discovered": 3, 00:10:03.730 "num_base_bdevs_operational": 3, 00:10:03.730 "base_bdevs_list": [ 00:10:03.730 { 00:10:03.730 "name": "BaseBdev1", 00:10:03.730 "uuid": "f5990cd5-b3aa-4e46-9e7b-eebddcb7c3fa", 00:10:03.730 "is_configured": true, 00:10:03.730 "data_offset": 2048, 00:10:03.730 "data_size": 63488 00:10:03.730 }, 00:10:03.730 { 00:10:03.730 "name": "BaseBdev2", 00:10:03.730 "uuid": "1a4f8ab9-a535-436b-b8f1-5115a2fe1c5b", 00:10:03.730 "is_configured": true, 00:10:03.730 "data_offset": 2048, 00:10:03.730 "data_size": 63488 00:10:03.730 }, 00:10:03.730 { 
00:10:03.730 "name": "BaseBdev3", 00:10:03.730 "uuid": "e669eecb-976c-4ff8-b8ae-827ca62dd91b", 00:10:03.730 "is_configured": true, 00:10:03.730 "data_offset": 2048, 00:10:03.730 "data_size": 63488 00:10:03.730 } 00:10:03.730 ] 00:10:03.730 } 00:10:03.730 } 00:10:03.730 }' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:03.989 BaseBdev2 00:10:03.989 BaseBdev3' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.989 02:24:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.989 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.989 [2024-10-13 02:24:22.670260] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.248 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.248 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:04.248 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:04.248 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.248 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:04.248 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.249 
02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.249 "name": "Existed_Raid", 00:10:04.249 "uuid": "613e743d-7429-4624-934c-2cd99261636b", 00:10:04.249 "strip_size_kb": 0, 00:10:04.249 "state": "online", 00:10:04.249 "raid_level": "raid1", 00:10:04.249 "superblock": true, 00:10:04.249 "num_base_bdevs": 3, 00:10:04.249 "num_base_bdevs_discovered": 2, 00:10:04.249 "num_base_bdevs_operational": 2, 00:10:04.249 "base_bdevs_list": [ 00:10:04.249 { 00:10:04.249 "name": null, 00:10:04.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.249 "is_configured": false, 00:10:04.249 "data_offset": 0, 00:10:04.249 "data_size": 63488 00:10:04.249 }, 00:10:04.249 { 00:10:04.249 "name": "BaseBdev2", 00:10:04.249 "uuid": "1a4f8ab9-a535-436b-b8f1-5115a2fe1c5b", 00:10:04.249 "is_configured": true, 00:10:04.249 "data_offset": 2048, 00:10:04.249 "data_size": 63488 00:10:04.249 }, 00:10:04.249 { 00:10:04.249 "name": "BaseBdev3", 00:10:04.249 "uuid": "e669eecb-976c-4ff8-b8ae-827ca62dd91b", 00:10:04.249 "is_configured": true, 00:10:04.249 "data_offset": 2048, 00:10:04.249 "data_size": 63488 00:10:04.249 } 00:10:04.249 ] 00:10:04.249 }' 00:10:04.249 02:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.249 
02:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.527 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.527 [2024-10-13 02:24:23.184898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.799 [2024-10-13 02:24:23.256050] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.799 [2024-10-13 02:24:23.256191] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.799 [2024-10-13 02:24:23.267776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.799 [2024-10-13 02:24:23.267896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.799 [2024-10-13 02:24:23.267967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.799 BaseBdev2 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:04.799 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.800 [ 00:10:04.800 { 00:10:04.800 "name": "BaseBdev2", 00:10:04.800 "aliases": [ 00:10:04.800 "256bd822-cdf7-4938-a0dd-a4eb115873e4" 00:10:04.800 ], 00:10:04.800 "product_name": "Malloc disk", 00:10:04.800 "block_size": 512, 00:10:04.800 "num_blocks": 65536, 00:10:04.800 "uuid": "256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:04.800 "assigned_rate_limits": { 00:10:04.800 "rw_ios_per_sec": 0, 00:10:04.800 "rw_mbytes_per_sec": 0, 00:10:04.800 "r_mbytes_per_sec": 0, 00:10:04.800 "w_mbytes_per_sec": 0 00:10:04.800 }, 00:10:04.800 "claimed": false, 00:10:04.800 "zoned": false, 00:10:04.800 "supported_io_types": { 00:10:04.800 "read": true, 00:10:04.800 "write": true, 00:10:04.800 "unmap": true, 00:10:04.800 "flush": true, 00:10:04.800 "reset": true, 00:10:04.800 "nvme_admin": false, 00:10:04.800 "nvme_io": false, 00:10:04.800 
"nvme_io_md": false, 00:10:04.800 "write_zeroes": true, 00:10:04.800 "zcopy": true, 00:10:04.800 "get_zone_info": false, 00:10:04.800 "zone_management": false, 00:10:04.800 "zone_append": false, 00:10:04.800 "compare": false, 00:10:04.800 "compare_and_write": false, 00:10:04.800 "abort": true, 00:10:04.800 "seek_hole": false, 00:10:04.800 "seek_data": false, 00:10:04.800 "copy": true, 00:10:04.800 "nvme_iov_md": false 00:10:04.800 }, 00:10:04.800 "memory_domains": [ 00:10:04.800 { 00:10:04.800 "dma_device_id": "system", 00:10:04.800 "dma_device_type": 1 00:10:04.800 }, 00:10:04.800 { 00:10:04.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.800 "dma_device_type": 2 00:10:04.800 } 00:10:04.800 ], 00:10:04.800 "driver_specific": {} 00:10:04.800 } 00:10:04.800 ] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.800 BaseBdev3 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.800 [ 00:10:04.800 { 00:10:04.800 "name": "BaseBdev3", 00:10:04.800 "aliases": [ 00:10:04.800 "c1ac4239-4586-4fd8-83e0-05243f60cf7e" 00:10:04.800 ], 00:10:04.800 "product_name": "Malloc disk", 00:10:04.800 "block_size": 512, 00:10:04.800 "num_blocks": 65536, 00:10:04.800 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:04.800 "assigned_rate_limits": { 00:10:04.800 "rw_ios_per_sec": 0, 00:10:04.800 "rw_mbytes_per_sec": 0, 00:10:04.800 "r_mbytes_per_sec": 0, 00:10:04.800 "w_mbytes_per_sec": 0 00:10:04.800 }, 00:10:04.800 "claimed": false, 00:10:04.800 "zoned": false, 00:10:04.800 "supported_io_types": { 00:10:04.800 "read": true, 00:10:04.800 "write": true, 00:10:04.800 "unmap": true, 00:10:04.800 "flush": true, 00:10:04.800 "reset": true, 00:10:04.800 "nvme_admin": false, 
00:10:04.800 "nvme_io": false, 00:10:04.800 "nvme_io_md": false, 00:10:04.800 "write_zeroes": true, 00:10:04.800 "zcopy": true, 00:10:04.800 "get_zone_info": false, 00:10:04.800 "zone_management": false, 00:10:04.800 "zone_append": false, 00:10:04.800 "compare": false, 00:10:04.800 "compare_and_write": false, 00:10:04.800 "abort": true, 00:10:04.800 "seek_hole": false, 00:10:04.800 "seek_data": false, 00:10:04.800 "copy": true, 00:10:04.800 "nvme_iov_md": false 00:10:04.800 }, 00:10:04.800 "memory_domains": [ 00:10:04.800 { 00:10:04.800 "dma_device_id": "system", 00:10:04.800 "dma_device_type": 1 00:10:04.800 }, 00:10:04.800 { 00:10:04.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.800 "dma_device_type": 2 00:10:04.800 } 00:10:04.800 ], 00:10:04.800 "driver_specific": {} 00:10:04.800 } 00:10:04.800 ] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.800 [2024-10-13 02:24:23.419643] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.800 [2024-10-13 02:24:23.419781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.800 [2024-10-13 02:24:23.419822] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.800 [2024-10-13 02:24:23.421662] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.800 
02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.800 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.800 "name": "Existed_Raid", 00:10:04.800 "uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:04.800 "strip_size_kb": 0, 00:10:04.800 "state": "configuring", 00:10:04.800 "raid_level": "raid1", 00:10:04.800 "superblock": true, 00:10:04.800 "num_base_bdevs": 3, 00:10:04.800 "num_base_bdevs_discovered": 2, 00:10:04.800 "num_base_bdevs_operational": 3, 00:10:04.800 "base_bdevs_list": [ 00:10:04.800 { 00:10:04.800 "name": "BaseBdev1", 00:10:04.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.800 "is_configured": false, 00:10:04.800 "data_offset": 0, 00:10:04.800 "data_size": 0 00:10:04.800 }, 00:10:04.800 { 00:10:04.800 "name": "BaseBdev2", 00:10:04.800 "uuid": "256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:04.800 "is_configured": true, 00:10:04.800 "data_offset": 2048, 00:10:04.800 "data_size": 63488 00:10:04.800 }, 00:10:04.800 { 00:10:04.800 "name": "BaseBdev3", 00:10:04.800 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:04.800 "is_configured": true, 00:10:04.801 "data_offset": 2048, 00:10:04.801 "data_size": 63488 00:10:04.801 } 00:10:04.801 ] 00:10:04.801 }' 00:10:04.801 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.801 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 [2024-10-13 02:24:23.819041] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.368 02:24:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.368 "name": 
"Existed_Raid", 00:10:05.368 "uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:05.368 "strip_size_kb": 0, 00:10:05.368 "state": "configuring", 00:10:05.368 "raid_level": "raid1", 00:10:05.368 "superblock": true, 00:10:05.368 "num_base_bdevs": 3, 00:10:05.368 "num_base_bdevs_discovered": 1, 00:10:05.368 "num_base_bdevs_operational": 3, 00:10:05.368 "base_bdevs_list": [ 00:10:05.368 { 00:10:05.368 "name": "BaseBdev1", 00:10:05.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.368 "is_configured": false, 00:10:05.368 "data_offset": 0, 00:10:05.368 "data_size": 0 00:10:05.368 }, 00:10:05.368 { 00:10:05.368 "name": null, 00:10:05.368 "uuid": "256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:05.368 "is_configured": false, 00:10:05.368 "data_offset": 0, 00:10:05.368 "data_size": 63488 00:10:05.368 }, 00:10:05.368 { 00:10:05.368 "name": "BaseBdev3", 00:10:05.368 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:05.368 "is_configured": true, 00:10:05.368 "data_offset": 2048, 00:10:05.368 "data_size": 63488 00:10:05.368 } 00:10:05.368 ] 00:10:05.368 }' 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.368 02:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.628 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.628 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.628 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.628 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.628 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.628 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:05.628 
02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.628 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.628 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.888 [2024-10-13 02:24:24.313028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.888 BaseBdev1 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.888 [ 00:10:05.888 { 00:10:05.888 "name": "BaseBdev1", 00:10:05.888 "aliases": [ 00:10:05.888 "ab92bbab-96a2-492f-ba45-54047ea18d49" 00:10:05.888 ], 00:10:05.888 "product_name": "Malloc disk", 00:10:05.888 "block_size": 512, 00:10:05.888 "num_blocks": 65536, 00:10:05.888 "uuid": "ab92bbab-96a2-492f-ba45-54047ea18d49", 00:10:05.888 "assigned_rate_limits": { 00:10:05.888 "rw_ios_per_sec": 0, 00:10:05.888 "rw_mbytes_per_sec": 0, 00:10:05.888 "r_mbytes_per_sec": 0, 00:10:05.888 "w_mbytes_per_sec": 0 00:10:05.888 }, 00:10:05.888 "claimed": true, 00:10:05.888 "claim_type": "exclusive_write", 00:10:05.888 "zoned": false, 00:10:05.888 "supported_io_types": { 00:10:05.888 "read": true, 00:10:05.888 "write": true, 00:10:05.888 "unmap": true, 00:10:05.888 "flush": true, 00:10:05.888 "reset": true, 00:10:05.888 "nvme_admin": false, 00:10:05.888 "nvme_io": false, 00:10:05.888 "nvme_io_md": false, 00:10:05.888 "write_zeroes": true, 00:10:05.888 "zcopy": true, 00:10:05.888 "get_zone_info": false, 00:10:05.888 "zone_management": false, 00:10:05.888 "zone_append": false, 00:10:05.888 "compare": false, 00:10:05.888 "compare_and_write": false, 00:10:05.888 "abort": true, 00:10:05.888 "seek_hole": false, 00:10:05.888 "seek_data": false, 00:10:05.888 "copy": true, 00:10:05.888 "nvme_iov_md": false 00:10:05.888 }, 00:10:05.888 "memory_domains": [ 00:10:05.888 { 00:10:05.888 "dma_device_id": "system", 00:10:05.888 "dma_device_type": 1 00:10:05.888 }, 00:10:05.888 { 00:10:05.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.888 "dma_device_type": 2 00:10:05.888 } 00:10:05.888 ], 00:10:05.888 "driver_specific": {} 00:10:05.888 } 00:10:05.888 ] 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:05.888 
02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.888 "name": "Existed_Raid", 00:10:05.888 "uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:05.888 "strip_size_kb": 0, 
00:10:05.888 "state": "configuring", 00:10:05.888 "raid_level": "raid1", 00:10:05.888 "superblock": true, 00:10:05.888 "num_base_bdevs": 3, 00:10:05.888 "num_base_bdevs_discovered": 2, 00:10:05.888 "num_base_bdevs_operational": 3, 00:10:05.888 "base_bdevs_list": [ 00:10:05.888 { 00:10:05.888 "name": "BaseBdev1", 00:10:05.888 "uuid": "ab92bbab-96a2-492f-ba45-54047ea18d49", 00:10:05.888 "is_configured": true, 00:10:05.888 "data_offset": 2048, 00:10:05.888 "data_size": 63488 00:10:05.888 }, 00:10:05.888 { 00:10:05.888 "name": null, 00:10:05.888 "uuid": "256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:05.888 "is_configured": false, 00:10:05.888 "data_offset": 0, 00:10:05.888 "data_size": 63488 00:10:05.888 }, 00:10:05.888 { 00:10:05.888 "name": "BaseBdev3", 00:10:05.888 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:05.888 "is_configured": true, 00:10:05.888 "data_offset": 2048, 00:10:05.888 "data_size": 63488 00:10:05.888 } 00:10:05.888 ] 00:10:05.888 }' 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.888 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.147 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.148 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.148 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.148 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.148 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.148 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:06.148 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:06.148 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.407 [2024-10-13 02:24:24.836224] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.407 "name": "Existed_Raid", 00:10:06.407 "uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:06.407 "strip_size_kb": 0, 00:10:06.407 "state": "configuring", 00:10:06.407 "raid_level": "raid1", 00:10:06.407 "superblock": true, 00:10:06.407 "num_base_bdevs": 3, 00:10:06.407 "num_base_bdevs_discovered": 1, 00:10:06.407 "num_base_bdevs_operational": 3, 00:10:06.407 "base_bdevs_list": [ 00:10:06.407 { 00:10:06.407 "name": "BaseBdev1", 00:10:06.407 "uuid": "ab92bbab-96a2-492f-ba45-54047ea18d49", 00:10:06.407 "is_configured": true, 00:10:06.407 "data_offset": 2048, 00:10:06.407 "data_size": 63488 00:10:06.407 }, 00:10:06.407 { 00:10:06.407 "name": null, 00:10:06.407 "uuid": "256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:06.407 "is_configured": false, 00:10:06.407 "data_offset": 0, 00:10:06.407 "data_size": 63488 00:10:06.407 }, 00:10:06.407 { 00:10:06.407 "name": null, 00:10:06.407 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:06.407 "is_configured": false, 00:10:06.407 "data_offset": 0, 00:10:06.407 "data_size": 63488 00:10:06.407 } 00:10:06.407 ] 00:10:06.407 }' 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.407 02:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.667 [2024-10-13 02:24:25.315427] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.667 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.926 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.926 "name": "Existed_Raid", 00:10:06.926 "uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:06.926 "strip_size_kb": 0, 00:10:06.926 "state": "configuring", 00:10:06.926 "raid_level": "raid1", 00:10:06.926 "superblock": true, 00:10:06.926 "num_base_bdevs": 3, 00:10:06.926 "num_base_bdevs_discovered": 2, 00:10:06.926 "num_base_bdevs_operational": 3, 00:10:06.926 "base_bdevs_list": [ 00:10:06.926 { 00:10:06.926 "name": "BaseBdev1", 00:10:06.926 "uuid": "ab92bbab-96a2-492f-ba45-54047ea18d49", 00:10:06.926 "is_configured": true, 00:10:06.926 "data_offset": 2048, 00:10:06.926 "data_size": 63488 00:10:06.926 }, 00:10:06.926 { 00:10:06.926 "name": null, 00:10:06.926 "uuid": "256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:06.926 "is_configured": false, 00:10:06.926 "data_offset": 0, 00:10:06.926 "data_size": 63488 00:10:06.926 }, 00:10:06.926 { 00:10:06.926 "name": "BaseBdev3", 00:10:06.927 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:06.927 "is_configured": true, 00:10:06.927 "data_offset": 2048, 00:10:06.927 "data_size": 63488 00:10:06.927 } 00:10:06.927 ] 00:10:06.927 }' 00:10:06.927 02:24:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.927 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.186 [2024-10-13 02:24:25.806631] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.186 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.446 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.446 "name": "Existed_Raid", 00:10:07.446 "uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:07.446 "strip_size_kb": 0, 00:10:07.446 "state": "configuring", 00:10:07.446 "raid_level": "raid1", 00:10:07.446 "superblock": true, 00:10:07.446 "num_base_bdevs": 3, 00:10:07.446 "num_base_bdevs_discovered": 1, 00:10:07.446 "num_base_bdevs_operational": 3, 00:10:07.446 "base_bdevs_list": [ 00:10:07.446 { 00:10:07.446 "name": null, 00:10:07.446 "uuid": "ab92bbab-96a2-492f-ba45-54047ea18d49", 00:10:07.446 "is_configured": false, 00:10:07.446 "data_offset": 0, 00:10:07.446 "data_size": 63488 00:10:07.446 }, 00:10:07.446 { 00:10:07.446 "name": null, 00:10:07.446 "uuid": 
"256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:07.446 "is_configured": false, 00:10:07.446 "data_offset": 0, 00:10:07.446 "data_size": 63488 00:10:07.446 }, 00:10:07.446 { 00:10:07.446 "name": "BaseBdev3", 00:10:07.446 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:07.446 "is_configured": true, 00:10:07.446 "data_offset": 2048, 00:10:07.446 "data_size": 63488 00:10:07.446 } 00:10:07.446 ] 00:10:07.446 }' 00:10:07.446 02:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.446 02:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.706 [2024-10-13 02:24:26.320235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.706 "name": "Existed_Raid", 00:10:07.706 "uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:07.706 "strip_size_kb": 0, 00:10:07.706 "state": "configuring", 00:10:07.706 
"raid_level": "raid1", 00:10:07.706 "superblock": true, 00:10:07.706 "num_base_bdevs": 3, 00:10:07.706 "num_base_bdevs_discovered": 2, 00:10:07.706 "num_base_bdevs_operational": 3, 00:10:07.706 "base_bdevs_list": [ 00:10:07.706 { 00:10:07.706 "name": null, 00:10:07.706 "uuid": "ab92bbab-96a2-492f-ba45-54047ea18d49", 00:10:07.706 "is_configured": false, 00:10:07.706 "data_offset": 0, 00:10:07.706 "data_size": 63488 00:10:07.706 }, 00:10:07.706 { 00:10:07.706 "name": "BaseBdev2", 00:10:07.706 "uuid": "256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:07.706 "is_configured": true, 00:10:07.706 "data_offset": 2048, 00:10:07.706 "data_size": 63488 00:10:07.706 }, 00:10:07.706 { 00:10:07.706 "name": "BaseBdev3", 00:10:07.706 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:07.706 "is_configured": true, 00:10:07.706 "data_offset": 2048, 00:10:07.706 "data_size": 63488 00:10:07.706 } 00:10:07.706 ] 00:10:07.706 }' 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.706 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:08.276 02:24:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ab92bbab-96a2-492f-ba45-54047ea18d49 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 [2024-10-13 02:24:26.846262] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:08.276 [2024-10-13 02:24:26.846536] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:08.276 [2024-10-13 02:24:26.846588] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:08.276 [2024-10-13 02:24:26.846851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:10:08.276 NewBaseBdev 00:10:08.276 [2024-10-13 02:24:26.847050] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:08.276 [2024-10-13 02:24:26.847070] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:08.276 [2024-10-13 02:24:26.847171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:08.276 
02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 [ 00:10:08.276 { 00:10:08.276 "name": "NewBaseBdev", 00:10:08.276 "aliases": [ 00:10:08.276 "ab92bbab-96a2-492f-ba45-54047ea18d49" 00:10:08.276 ], 00:10:08.276 "product_name": "Malloc disk", 00:10:08.276 "block_size": 512, 00:10:08.276 "num_blocks": 65536, 00:10:08.276 "uuid": "ab92bbab-96a2-492f-ba45-54047ea18d49", 00:10:08.276 "assigned_rate_limits": { 00:10:08.276 "rw_ios_per_sec": 0, 00:10:08.276 "rw_mbytes_per_sec": 0, 00:10:08.276 "r_mbytes_per_sec": 0, 00:10:08.276 "w_mbytes_per_sec": 0 00:10:08.276 }, 00:10:08.276 "claimed": true, 00:10:08.276 "claim_type": "exclusive_write", 00:10:08.276 
"zoned": false, 00:10:08.276 "supported_io_types": { 00:10:08.276 "read": true, 00:10:08.276 "write": true, 00:10:08.276 "unmap": true, 00:10:08.276 "flush": true, 00:10:08.276 "reset": true, 00:10:08.276 "nvme_admin": false, 00:10:08.276 "nvme_io": false, 00:10:08.276 "nvme_io_md": false, 00:10:08.276 "write_zeroes": true, 00:10:08.276 "zcopy": true, 00:10:08.276 "get_zone_info": false, 00:10:08.276 "zone_management": false, 00:10:08.276 "zone_append": false, 00:10:08.276 "compare": false, 00:10:08.276 "compare_and_write": false, 00:10:08.276 "abort": true, 00:10:08.276 "seek_hole": false, 00:10:08.276 "seek_data": false, 00:10:08.276 "copy": true, 00:10:08.276 "nvme_iov_md": false 00:10:08.276 }, 00:10:08.276 "memory_domains": [ 00:10:08.276 { 00:10:08.276 "dma_device_id": "system", 00:10:08.276 "dma_device_type": 1 00:10:08.276 }, 00:10:08.276 { 00:10:08.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.276 "dma_device_type": 2 00:10:08.276 } 00:10:08.276 ], 00:10:08.276 "driver_specific": {} 00:10:08.276 } 00:10:08.276 ] 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.276 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.276 "name": "Existed_Raid", 00:10:08.276 "uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:08.276 "strip_size_kb": 0, 00:10:08.277 "state": "online", 00:10:08.277 "raid_level": "raid1", 00:10:08.277 "superblock": true, 00:10:08.277 "num_base_bdevs": 3, 00:10:08.277 "num_base_bdevs_discovered": 3, 00:10:08.277 "num_base_bdevs_operational": 3, 00:10:08.277 "base_bdevs_list": [ 00:10:08.277 { 00:10:08.277 "name": "NewBaseBdev", 00:10:08.277 "uuid": "ab92bbab-96a2-492f-ba45-54047ea18d49", 00:10:08.277 "is_configured": true, 00:10:08.277 "data_offset": 2048, 00:10:08.277 "data_size": 63488 00:10:08.277 }, 00:10:08.277 { 00:10:08.277 "name": "BaseBdev2", 00:10:08.277 "uuid": "256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:08.277 "is_configured": true, 00:10:08.277 "data_offset": 2048, 00:10:08.277 "data_size": 63488 00:10:08.277 }, 00:10:08.277 
{ 00:10:08.277 "name": "BaseBdev3", 00:10:08.277 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:08.277 "is_configured": true, 00:10:08.277 "data_offset": 2048, 00:10:08.277 "data_size": 63488 00:10:08.277 } 00:10:08.277 ] 00:10:08.277 }' 00:10:08.277 02:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.277 02:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.845 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.845 [2024-10-13 02:24:27.289895] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.846 "name": "Existed_Raid", 00:10:08.846 
"aliases": [ 00:10:08.846 "c2cba3b6-1832-484a-b1ab-a7f90336612b" 00:10:08.846 ], 00:10:08.846 "product_name": "Raid Volume", 00:10:08.846 "block_size": 512, 00:10:08.846 "num_blocks": 63488, 00:10:08.846 "uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:08.846 "assigned_rate_limits": { 00:10:08.846 "rw_ios_per_sec": 0, 00:10:08.846 "rw_mbytes_per_sec": 0, 00:10:08.846 "r_mbytes_per_sec": 0, 00:10:08.846 "w_mbytes_per_sec": 0 00:10:08.846 }, 00:10:08.846 "claimed": false, 00:10:08.846 "zoned": false, 00:10:08.846 "supported_io_types": { 00:10:08.846 "read": true, 00:10:08.846 "write": true, 00:10:08.846 "unmap": false, 00:10:08.846 "flush": false, 00:10:08.846 "reset": true, 00:10:08.846 "nvme_admin": false, 00:10:08.846 "nvme_io": false, 00:10:08.846 "nvme_io_md": false, 00:10:08.846 "write_zeroes": true, 00:10:08.846 "zcopy": false, 00:10:08.846 "get_zone_info": false, 00:10:08.846 "zone_management": false, 00:10:08.846 "zone_append": false, 00:10:08.846 "compare": false, 00:10:08.846 "compare_and_write": false, 00:10:08.846 "abort": false, 00:10:08.846 "seek_hole": false, 00:10:08.846 "seek_data": false, 00:10:08.846 "copy": false, 00:10:08.846 "nvme_iov_md": false 00:10:08.846 }, 00:10:08.846 "memory_domains": [ 00:10:08.846 { 00:10:08.846 "dma_device_id": "system", 00:10:08.846 "dma_device_type": 1 00:10:08.846 }, 00:10:08.846 { 00:10:08.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.846 "dma_device_type": 2 00:10:08.846 }, 00:10:08.846 { 00:10:08.846 "dma_device_id": "system", 00:10:08.846 "dma_device_type": 1 00:10:08.846 }, 00:10:08.846 { 00:10:08.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.846 "dma_device_type": 2 00:10:08.846 }, 00:10:08.846 { 00:10:08.846 "dma_device_id": "system", 00:10:08.846 "dma_device_type": 1 00:10:08.846 }, 00:10:08.846 { 00:10:08.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.846 "dma_device_type": 2 00:10:08.846 } 00:10:08.846 ], 00:10:08.846 "driver_specific": { 00:10:08.846 "raid": { 00:10:08.846 
"uuid": "c2cba3b6-1832-484a-b1ab-a7f90336612b", 00:10:08.846 "strip_size_kb": 0, 00:10:08.846 "state": "online", 00:10:08.846 "raid_level": "raid1", 00:10:08.846 "superblock": true, 00:10:08.846 "num_base_bdevs": 3, 00:10:08.846 "num_base_bdevs_discovered": 3, 00:10:08.846 "num_base_bdevs_operational": 3, 00:10:08.846 "base_bdevs_list": [ 00:10:08.846 { 00:10:08.846 "name": "NewBaseBdev", 00:10:08.846 "uuid": "ab92bbab-96a2-492f-ba45-54047ea18d49", 00:10:08.846 "is_configured": true, 00:10:08.846 "data_offset": 2048, 00:10:08.846 "data_size": 63488 00:10:08.846 }, 00:10:08.846 { 00:10:08.846 "name": "BaseBdev2", 00:10:08.846 "uuid": "256bd822-cdf7-4938-a0dd-a4eb115873e4", 00:10:08.846 "is_configured": true, 00:10:08.846 "data_offset": 2048, 00:10:08.846 "data_size": 63488 00:10:08.846 }, 00:10:08.846 { 00:10:08.846 "name": "BaseBdev3", 00:10:08.846 "uuid": "c1ac4239-4586-4fd8-83e0-05243f60cf7e", 00:10:08.846 "is_configured": true, 00:10:08.846 "data_offset": 2048, 00:10:08.846 "data_size": 63488 00:10:08.846 } 00:10:08.846 ] 00:10:08.846 } 00:10:08.846 } 00:10:08.846 }' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:08.846 BaseBdev2 00:10:08.846 BaseBdev3' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:08.846 02:24:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.846 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.846 [2024-10-13 02:24:27.525172] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.846 [2024-10-13 02:24:27.525291] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.846 [2024-10-13 02:24:27.525388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.846 [2024-10-13 02:24:27.525656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.846 [2024-10-13 02:24:27.525707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78944 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 78944 ']' 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 78944 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78944 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78944' 00:10:09.106 killing process with pid 78944 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 78944 00:10:09.106 [2024-10-13 02:24:27.564380] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.106 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 78944 00:10:09.106 [2024-10-13 02:24:27.595662] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.366 02:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:09.366 00:10:09.366 real 0m8.767s 00:10:09.366 user 0m14.884s 00:10:09.366 sys 0m1.871s 00:10:09.366 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.366 02:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.366 ************************************ 00:10:09.366 END TEST raid_state_function_test_sb 00:10:09.366 ************************************ 00:10:09.366 02:24:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:09.366 02:24:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:09.366 02:24:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.366 02:24:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.366 ************************************ 00:10:09.366 START TEST raid_superblock_test 00:10:09.366 ************************************ 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79548 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79548 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79548 ']' 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.366 02:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.366 [2024-10-13 02:24:27.996123] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:09.366 [2024-10-13 02:24:27.996344] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79548 ] 00:10:09.625 [2024-10-13 02:24:28.139834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.625 [2024-10-13 02:24:28.188055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.625 [2024-10-13 02:24:28.230898] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.625 [2024-10-13 02:24:28.231025] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:10.194 
02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.194 malloc1 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:10.194 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.454 [2024-10-13 02:24:28.881037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:10.454 [2024-10-13 02:24:28.881185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.454 [2024-10-13 02:24:28.881222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:10.454 [2024-10-13 02:24:28.881269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.454 [2024-10-13 02:24:28.883503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.454 [2024-10-13 02:24:28.883584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:10.454 pt1 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.454 malloc2 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.454 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.455 [2024-10-13 02:24:28.923037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:10.455 [2024-10-13 02:24:28.923214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.455 [2024-10-13 02:24:28.923258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:10.455 [2024-10-13 02:24:28.923302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.455 [2024-10-13 02:24:28.925564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.455 [2024-10-13 02:24:28.925638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:10.455 
pt2 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.455 malloc3 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.455 [2024-10-13 02:24:28.951823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:10.455 [2024-10-13 02:24:28.952010] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.455 [2024-10-13 02:24:28.952049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:10.455 [2024-10-13 02:24:28.952083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.455 [2024-10-13 02:24:28.954253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.455 [2024-10-13 02:24:28.954328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:10.455 pt3 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.455 [2024-10-13 02:24:28.963852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:10.455 [2024-10-13 02:24:28.965694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:10.455 [2024-10-13 02:24:28.965789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:10.455 [2024-10-13 02:24:28.965967] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:10.455 [2024-10-13 02:24:28.966011] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:10.455 [2024-10-13 02:24:28.966302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:10.455 
[2024-10-13 02:24:28.966483] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:10.455 [2024-10-13 02:24:28.966529] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:10.455 [2024-10-13 02:24:28.966694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.455 02:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.455 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.455 "name": "raid_bdev1", 00:10:10.455 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:10.455 "strip_size_kb": 0, 00:10:10.455 "state": "online", 00:10:10.455 "raid_level": "raid1", 00:10:10.455 "superblock": true, 00:10:10.455 "num_base_bdevs": 3, 00:10:10.455 "num_base_bdevs_discovered": 3, 00:10:10.455 "num_base_bdevs_operational": 3, 00:10:10.455 "base_bdevs_list": [ 00:10:10.455 { 00:10:10.455 "name": "pt1", 00:10:10.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.455 "is_configured": true, 00:10:10.455 "data_offset": 2048, 00:10:10.455 "data_size": 63488 00:10:10.455 }, 00:10:10.455 { 00:10:10.455 "name": "pt2", 00:10:10.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.455 "is_configured": true, 00:10:10.455 "data_offset": 2048, 00:10:10.455 "data_size": 63488 00:10:10.455 }, 00:10:10.455 { 00:10:10.455 "name": "pt3", 00:10:10.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.455 "is_configured": true, 00:10:10.455 "data_offset": 2048, 00:10:10.455 "data_size": 63488 00:10:10.455 } 00:10:10.455 ] 00:10:10.455 }' 00:10:10.455 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.455 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.025 02:24:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.025 [2024-10-13 02:24:29.439357] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.025 "name": "raid_bdev1", 00:10:11.025 "aliases": [ 00:10:11.025 "8cd09d91-4216-44df-b9ca-312f1049d86f" 00:10:11.025 ], 00:10:11.025 "product_name": "Raid Volume", 00:10:11.025 "block_size": 512, 00:10:11.025 "num_blocks": 63488, 00:10:11.025 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:11.025 "assigned_rate_limits": { 00:10:11.025 "rw_ios_per_sec": 0, 00:10:11.025 "rw_mbytes_per_sec": 0, 00:10:11.025 "r_mbytes_per_sec": 0, 00:10:11.025 "w_mbytes_per_sec": 0 00:10:11.025 }, 00:10:11.025 "claimed": false, 00:10:11.025 "zoned": false, 00:10:11.025 "supported_io_types": { 00:10:11.025 "read": true, 00:10:11.025 "write": true, 00:10:11.025 "unmap": false, 00:10:11.025 "flush": false, 00:10:11.025 "reset": true, 00:10:11.025 "nvme_admin": false, 00:10:11.025 "nvme_io": false, 00:10:11.025 "nvme_io_md": false, 00:10:11.025 "write_zeroes": true, 00:10:11.025 "zcopy": false, 00:10:11.025 "get_zone_info": false, 00:10:11.025 "zone_management": false, 00:10:11.025 "zone_append": false, 00:10:11.025 "compare": false, 00:10:11.025 
"compare_and_write": false, 00:10:11.025 "abort": false, 00:10:11.025 "seek_hole": false, 00:10:11.025 "seek_data": false, 00:10:11.025 "copy": false, 00:10:11.025 "nvme_iov_md": false 00:10:11.025 }, 00:10:11.025 "memory_domains": [ 00:10:11.025 { 00:10:11.025 "dma_device_id": "system", 00:10:11.025 "dma_device_type": 1 00:10:11.025 }, 00:10:11.025 { 00:10:11.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.025 "dma_device_type": 2 00:10:11.025 }, 00:10:11.025 { 00:10:11.025 "dma_device_id": "system", 00:10:11.025 "dma_device_type": 1 00:10:11.025 }, 00:10:11.025 { 00:10:11.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.025 "dma_device_type": 2 00:10:11.025 }, 00:10:11.025 { 00:10:11.025 "dma_device_id": "system", 00:10:11.025 "dma_device_type": 1 00:10:11.025 }, 00:10:11.025 { 00:10:11.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.025 "dma_device_type": 2 00:10:11.025 } 00:10:11.025 ], 00:10:11.025 "driver_specific": { 00:10:11.025 "raid": { 00:10:11.025 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:11.025 "strip_size_kb": 0, 00:10:11.025 "state": "online", 00:10:11.025 "raid_level": "raid1", 00:10:11.025 "superblock": true, 00:10:11.025 "num_base_bdevs": 3, 00:10:11.025 "num_base_bdevs_discovered": 3, 00:10:11.025 "num_base_bdevs_operational": 3, 00:10:11.025 "base_bdevs_list": [ 00:10:11.025 { 00:10:11.025 "name": "pt1", 00:10:11.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.025 "is_configured": true, 00:10:11.025 "data_offset": 2048, 00:10:11.025 "data_size": 63488 00:10:11.025 }, 00:10:11.025 { 00:10:11.025 "name": "pt2", 00:10:11.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.025 "is_configured": true, 00:10:11.025 "data_offset": 2048, 00:10:11.025 "data_size": 63488 00:10:11.025 }, 00:10:11.025 { 00:10:11.025 "name": "pt3", 00:10:11.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.025 "is_configured": true, 00:10:11.025 "data_offset": 2048, 00:10:11.025 "data_size": 63488 00:10:11.025 } 
00:10:11.025 ] 00:10:11.025 } 00:10:11.025 } 00:10:11.025 }' 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:11.025 pt2 00:10:11.025 pt3' 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.025 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.026 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 [2024-10-13 02:24:29.726828] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8cd09d91-4216-44df-b9ca-312f1049d86f 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8cd09d91-4216-44df-b9ca-312f1049d86f ']' 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 [2024-10-13 02:24:29.770468] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.286 [2024-10-13 02:24:29.770543] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.286 [2024-10-13 02:24:29.770637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.286 [2024-10-13 02:24:29.770735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.286 [2024-10-13 02:24:29.770771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 [2024-10-13 02:24:29.922220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:11.286 [2024-10-13 02:24:29.924149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:11.286 [2024-10-13 02:24:29.924243] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:11.286 [2024-10-13 02:24:29.924325] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:11.286 [2024-10-13 02:24:29.924426] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:11.286 [2024-10-13 02:24:29.924495] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:11.286 [2024-10-13 02:24:29.924585] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.286 [2024-10-13 02:24:29.924625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:10:11.286 request: 00:10:11.286 { 00:10:11.286 "name": "raid_bdev1", 00:10:11.286 "raid_level": "raid1", 00:10:11.286 "base_bdevs": [ 00:10:11.286 "malloc1", 00:10:11.286 "malloc2", 00:10:11.286 "malloc3" 00:10:11.286 ], 00:10:11.286 "superblock": false, 00:10:11.286 "method": "bdev_raid_create", 00:10:11.286 "req_id": 1 00:10:11.286 } 00:10:11.286 Got JSON-RPC error response 00:10:11.286 response: 00:10:11.286 { 00:10:11.286 "code": -17, 00:10:11.286 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:11.286 } 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.546 [2024-10-13 02:24:29.986085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:11.546 [2024-10-13 02:24:29.986184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.546 [2024-10-13 02:24:29.986215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:11.546 [2024-10-13 02:24:29.986242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.546 [2024-10-13 02:24:29.988408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.546 [2024-10-13 02:24:29.988483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:11.546 [2024-10-13 02:24:29.988576] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:11.546 [2024-10-13 02:24:29.988635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:11.546 pt1 00:10:11.546 
02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.546 02:24:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.546 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.546 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.546 "name": "raid_bdev1", 00:10:11.546 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:11.546 "strip_size_kb": 0, 00:10:11.546 
"state": "configuring", 00:10:11.546 "raid_level": "raid1", 00:10:11.546 "superblock": true, 00:10:11.546 "num_base_bdevs": 3, 00:10:11.546 "num_base_bdevs_discovered": 1, 00:10:11.546 "num_base_bdevs_operational": 3, 00:10:11.546 "base_bdevs_list": [ 00:10:11.546 { 00:10:11.546 "name": "pt1", 00:10:11.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.546 "is_configured": true, 00:10:11.546 "data_offset": 2048, 00:10:11.546 "data_size": 63488 00:10:11.546 }, 00:10:11.546 { 00:10:11.546 "name": null, 00:10:11.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.546 "is_configured": false, 00:10:11.546 "data_offset": 2048, 00:10:11.546 "data_size": 63488 00:10:11.546 }, 00:10:11.546 { 00:10:11.546 "name": null, 00:10:11.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.546 "is_configured": false, 00:10:11.546 "data_offset": 2048, 00:10:11.546 "data_size": 63488 00:10:11.546 } 00:10:11.546 ] 00:10:11.546 }' 00:10:11.546 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.546 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.808 [2024-10-13 02:24:30.429349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.808 [2024-10-13 02:24:30.429493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.808 [2024-10-13 02:24:30.429533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:11.808 
[2024-10-13 02:24:30.429565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.808 [2024-10-13 02:24:30.429986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.808 [2024-10-13 02:24:30.430049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.808 [2024-10-13 02:24:30.430149] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.808 [2024-10-13 02:24:30.430198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.808 pt2 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.808 [2024-10-13 02:24:30.441317] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.808 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.068 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.068 "name": "raid_bdev1", 00:10:12.068 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:12.068 "strip_size_kb": 0, 00:10:12.068 "state": "configuring", 00:10:12.068 "raid_level": "raid1", 00:10:12.068 "superblock": true, 00:10:12.068 "num_base_bdevs": 3, 00:10:12.068 "num_base_bdevs_discovered": 1, 00:10:12.068 "num_base_bdevs_operational": 3, 00:10:12.068 "base_bdevs_list": [ 00:10:12.068 { 00:10:12.068 "name": "pt1", 00:10:12.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.068 "is_configured": true, 00:10:12.068 "data_offset": 2048, 00:10:12.068 "data_size": 63488 00:10:12.068 }, 00:10:12.068 { 00:10:12.068 "name": null, 00:10:12.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.068 "is_configured": false, 00:10:12.068 "data_offset": 0, 00:10:12.068 "data_size": 63488 00:10:12.068 }, 00:10:12.068 { 00:10:12.068 "name": null, 00:10:12.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.068 "is_configured": false, 00:10:12.068 
"data_offset": 2048, 00:10:12.068 "data_size": 63488 00:10:12.068 } 00:10:12.068 ] 00:10:12.068 }' 00:10:12.068 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.068 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.328 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:12.328 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.328 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.328 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.328 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.328 [2024-10-13 02:24:30.872615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.328 [2024-10-13 02:24:30.872759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.328 [2024-10-13 02:24:30.872783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:12.328 [2024-10-13 02:24:30.872793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.328 [2024-10-13 02:24:30.873213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.328 [2024-10-13 02:24:30.873231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.328 [2024-10-13 02:24:30.873310] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.328 [2024-10-13 02:24:30.873331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.328 pt2 00:10:12.328 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.328 02:24:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.328 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.328 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.328 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.329 [2024-10-13 02:24:30.884546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.329 [2024-10-13 02:24:30.884594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.329 [2024-10-13 02:24:30.884612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:12.329 [2024-10-13 02:24:30.884620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.329 [2024-10-13 02:24:30.884948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.329 [2024-10-13 02:24:30.884963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.329 [2024-10-13 02:24:30.885037] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:12.329 [2024-10-13 02:24:30.885065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.329 [2024-10-13 02:24:30.885159] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:12.329 [2024-10-13 02:24:30.885174] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:12.329 [2024-10-13 02:24:30.885386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:12.329 [2024-10-13 02:24:30.885490] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001900 00:10:12.329 [2024-10-13 02:24:30.885500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:12.329 [2024-10-13 02:24:30.885599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.329 pt3 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.329 02:24:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.329 "name": "raid_bdev1", 00:10:12.329 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:12.329 "strip_size_kb": 0, 00:10:12.329 "state": "online", 00:10:12.329 "raid_level": "raid1", 00:10:12.329 "superblock": true, 00:10:12.329 "num_base_bdevs": 3, 00:10:12.329 "num_base_bdevs_discovered": 3, 00:10:12.329 "num_base_bdevs_operational": 3, 00:10:12.329 "base_bdevs_list": [ 00:10:12.329 { 00:10:12.329 "name": "pt1", 00:10:12.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.329 "is_configured": true, 00:10:12.329 "data_offset": 2048, 00:10:12.329 "data_size": 63488 00:10:12.329 }, 00:10:12.329 { 00:10:12.329 "name": "pt2", 00:10:12.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.329 "is_configured": true, 00:10:12.329 "data_offset": 2048, 00:10:12.329 "data_size": 63488 00:10:12.329 }, 00:10:12.329 { 00:10:12.329 "name": "pt3", 00:10:12.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.329 "is_configured": true, 00:10:12.329 "data_offset": 2048, 00:10:12.329 "data_size": 63488 00:10:12.329 } 00:10:12.329 ] 00:10:12.329 }' 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.329 02:24:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.898 [2024-10-13 02:24:31.352093] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.898 "name": "raid_bdev1", 00:10:12.898 "aliases": [ 00:10:12.898 "8cd09d91-4216-44df-b9ca-312f1049d86f" 00:10:12.898 ], 00:10:12.898 "product_name": "Raid Volume", 00:10:12.898 "block_size": 512, 00:10:12.898 "num_blocks": 63488, 00:10:12.898 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:12.898 "assigned_rate_limits": { 00:10:12.898 "rw_ios_per_sec": 0, 00:10:12.898 "rw_mbytes_per_sec": 0, 00:10:12.898 "r_mbytes_per_sec": 0, 00:10:12.898 "w_mbytes_per_sec": 0 00:10:12.898 }, 00:10:12.898 "claimed": false, 00:10:12.898 "zoned": false, 00:10:12.898 "supported_io_types": { 00:10:12.898 "read": true, 00:10:12.898 "write": true, 00:10:12.898 "unmap": false, 00:10:12.898 "flush": false, 00:10:12.898 "reset": true, 00:10:12.898 "nvme_admin": false, 00:10:12.898 "nvme_io": false, 00:10:12.898 "nvme_io_md": false, 00:10:12.898 "write_zeroes": true, 00:10:12.898 "zcopy": false, 00:10:12.898 "get_zone_info": 
false, 00:10:12.898 "zone_management": false, 00:10:12.898 "zone_append": false, 00:10:12.898 "compare": false, 00:10:12.898 "compare_and_write": false, 00:10:12.898 "abort": false, 00:10:12.898 "seek_hole": false, 00:10:12.898 "seek_data": false, 00:10:12.898 "copy": false, 00:10:12.898 "nvme_iov_md": false 00:10:12.898 }, 00:10:12.898 "memory_domains": [ 00:10:12.898 { 00:10:12.898 "dma_device_id": "system", 00:10:12.898 "dma_device_type": 1 00:10:12.898 }, 00:10:12.898 { 00:10:12.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.898 "dma_device_type": 2 00:10:12.898 }, 00:10:12.898 { 00:10:12.898 "dma_device_id": "system", 00:10:12.898 "dma_device_type": 1 00:10:12.898 }, 00:10:12.898 { 00:10:12.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.898 "dma_device_type": 2 00:10:12.898 }, 00:10:12.898 { 00:10:12.898 "dma_device_id": "system", 00:10:12.898 "dma_device_type": 1 00:10:12.898 }, 00:10:12.898 { 00:10:12.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.898 "dma_device_type": 2 00:10:12.898 } 00:10:12.898 ], 00:10:12.898 "driver_specific": { 00:10:12.898 "raid": { 00:10:12.898 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:12.898 "strip_size_kb": 0, 00:10:12.898 "state": "online", 00:10:12.898 "raid_level": "raid1", 00:10:12.898 "superblock": true, 00:10:12.898 "num_base_bdevs": 3, 00:10:12.898 "num_base_bdevs_discovered": 3, 00:10:12.898 "num_base_bdevs_operational": 3, 00:10:12.898 "base_bdevs_list": [ 00:10:12.898 { 00:10:12.898 "name": "pt1", 00:10:12.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.898 "is_configured": true, 00:10:12.898 "data_offset": 2048, 00:10:12.898 "data_size": 63488 00:10:12.898 }, 00:10:12.898 { 00:10:12.898 "name": "pt2", 00:10:12.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.898 "is_configured": true, 00:10:12.898 "data_offset": 2048, 00:10:12.898 "data_size": 63488 00:10:12.898 }, 00:10:12.898 { 00:10:12.898 "name": "pt3", 00:10:12.898 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:12.898 "is_configured": true, 00:10:12.898 "data_offset": 2048, 00:10:12.898 "data_size": 63488 00:10:12.898 } 00:10:12.898 ] 00:10:12.898 } 00:10:12.898 } 00:10:12.898 }' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:12.898 pt2 00:10:12.898 pt3' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.898 02:24:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.898 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.898 [2024-10-13 02:24:31.571686] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8cd09d91-4216-44df-b9ca-312f1049d86f '!=' 8cd09d91-4216-44df-b9ca-312f1049d86f ']' 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.158 [2024-10-13 02:24:31.623407] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.158 02:24:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.158 "name": "raid_bdev1", 00:10:13.158 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:13.158 "strip_size_kb": 0, 00:10:13.158 "state": "online", 00:10:13.158 "raid_level": "raid1", 00:10:13.158 "superblock": true, 00:10:13.158 "num_base_bdevs": 3, 00:10:13.158 "num_base_bdevs_discovered": 2, 00:10:13.158 "num_base_bdevs_operational": 2, 00:10:13.158 "base_bdevs_list": [ 00:10:13.158 { 00:10:13.158 "name": null, 00:10:13.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.158 "is_configured": false, 00:10:13.158 "data_offset": 0, 00:10:13.158 "data_size": 63488 00:10:13.158 }, 00:10:13.158 { 00:10:13.158 "name": "pt2", 00:10:13.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.158 "is_configured": true, 00:10:13.158 "data_offset": 2048, 00:10:13.158 "data_size": 63488 00:10:13.158 }, 00:10:13.158 { 00:10:13.158 "name": "pt3", 00:10:13.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.158 "is_configured": true, 00:10:13.158 "data_offset": 2048, 00:10:13.158 "data_size": 63488 00:10:13.158 } 
00:10:13.158 ] 00:10:13.158 }' 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.158 02:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.418 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.678 [2024-10-13 02:24:32.106493] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.678 [2024-10-13 02:24:32.106534] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.678 [2024-10-13 02:24:32.106617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.678 [2024-10-13 02:24:32.106680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.678 [2024-10-13 02:24:32.106689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.678 02:24:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.678 [2024-10-13 02:24:32.182317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.678 [2024-10-13 02:24:32.182425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.678 [2024-10-13 02:24:32.182451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:13.678 [2024-10-13 02:24:32.182460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.678 [2024-10-13 02:24:32.184686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.678 [2024-10-13 02:24:32.184759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.678 [2024-10-13 02:24:32.184854] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:13.678 [2024-10-13 02:24:32.184918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.678 pt2 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.678 02:24:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.678 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.679 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.679 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.679 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.679 "name": "raid_bdev1", 00:10:13.679 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:13.679 "strip_size_kb": 0, 00:10:13.679 "state": "configuring", 00:10:13.679 "raid_level": "raid1", 00:10:13.679 "superblock": true, 00:10:13.679 "num_base_bdevs": 3, 00:10:13.679 "num_base_bdevs_discovered": 1, 00:10:13.679 "num_base_bdevs_operational": 2, 00:10:13.679 "base_bdevs_list": [ 00:10:13.679 { 00:10:13.679 "name": null, 00:10:13.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.679 "is_configured": false, 00:10:13.679 "data_offset": 2048, 00:10:13.679 "data_size": 63488 00:10:13.679 }, 00:10:13.679 { 00:10:13.679 "name": "pt2", 00:10:13.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.679 "is_configured": true, 00:10:13.679 "data_offset": 2048, 00:10:13.679 "data_size": 63488 00:10:13.679 }, 00:10:13.679 { 00:10:13.679 "name": null, 00:10:13.679 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.679 "is_configured": false, 00:10:13.679 "data_offset": 2048, 00:10:13.679 "data_size": 63488 00:10:13.679 } 
00:10:13.679 ] 00:10:13.679 }' 00:10:13.679 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.679 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.248 [2024-10-13 02:24:32.649593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:14.248 [2024-10-13 02:24:32.649767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.248 [2024-10-13 02:24:32.649806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:14.248 [2024-10-13 02:24:32.649833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.248 [2024-10-13 02:24:32.650278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.248 [2024-10-13 02:24:32.650338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:14.248 [2024-10-13 02:24:32.650446] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:14.248 [2024-10-13 02:24:32.650504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:14.248 [2024-10-13 02:24:32.650638] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 
00:10:14.248 [2024-10-13 02:24:32.650673] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:14.248 [2024-10-13 02:24:32.650939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:14.248 [2024-10-13 02:24:32.651105] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:14.248 [2024-10-13 02:24:32.651148] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:14.248 [2024-10-13 02:24:32.651287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.248 pt3 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.248 "name": "raid_bdev1", 00:10:14.248 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:14.248 "strip_size_kb": 0, 00:10:14.248 "state": "online", 00:10:14.248 "raid_level": "raid1", 00:10:14.248 "superblock": true, 00:10:14.248 "num_base_bdevs": 3, 00:10:14.248 "num_base_bdevs_discovered": 2, 00:10:14.248 "num_base_bdevs_operational": 2, 00:10:14.248 "base_bdevs_list": [ 00:10:14.248 { 00:10:14.248 "name": null, 00:10:14.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.248 "is_configured": false, 00:10:14.248 "data_offset": 2048, 00:10:14.248 "data_size": 63488 00:10:14.248 }, 00:10:14.248 { 00:10:14.248 "name": "pt2", 00:10:14.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.248 "is_configured": true, 00:10:14.248 "data_offset": 2048, 00:10:14.248 "data_size": 63488 00:10:14.248 }, 00:10:14.248 { 00:10:14.248 "name": "pt3", 00:10:14.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.248 "is_configured": true, 00:10:14.248 "data_offset": 2048, 00:10:14.248 "data_size": 63488 00:10:14.248 } 00:10:14.248 ] 00:10:14.248 }' 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.248 02:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.508 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:14.508 02:24:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.508 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.508 [2024-10-13 02:24:33.144730] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.508 [2024-10-13 02:24:33.144845] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.508 [2024-10-13 02:24:33.144961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.508 [2024-10-13 02:24:33.145042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.508 [2024-10-13 02:24:33.145149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:14.508 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.508 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.508 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.508 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.508 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:14.508 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.767 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.767 [2024-10-13 02:24:33.220568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.767 [2024-10-13 02:24:33.220682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.767 [2024-10-13 02:24:33.220714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:14.767 [2024-10-13 02:24:33.220743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.767 [2024-10-13 02:24:33.222993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.767 [2024-10-13 02:24:33.223071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:14.767 [2024-10-13 02:24:33.223171] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:14.768 [2024-10-13 02:24:33.223249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:14.768 [2024-10-13 02:24:33.223394] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:14.768 [2024-10-13 02:24:33.223457] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.768 [2024-10-13 02:24:33.223499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:10:14.768 pt1 00:10:14.768 [2024-10-13 02:24:33.223567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.768 "name": "raid_bdev1", 00:10:14.768 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:14.768 "strip_size_kb": 0, 00:10:14.768 "state": "configuring", 00:10:14.768 "raid_level": "raid1", 00:10:14.768 "superblock": true, 00:10:14.768 "num_base_bdevs": 3, 00:10:14.768 "num_base_bdevs_discovered": 1, 00:10:14.768 "num_base_bdevs_operational": 2, 00:10:14.768 "base_bdevs_list": [ 00:10:14.768 { 00:10:14.768 "name": null, 00:10:14.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.768 "is_configured": false, 00:10:14.768 "data_offset": 2048, 00:10:14.768 "data_size": 63488 00:10:14.768 }, 00:10:14.768 { 00:10:14.768 "name": "pt2", 00:10:14.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.768 "is_configured": true, 00:10:14.768 "data_offset": 2048, 00:10:14.768 "data_size": 63488 00:10:14.768 }, 00:10:14.768 { 00:10:14.768 "name": null, 00:10:14.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.768 "is_configured": false, 00:10:14.768 "data_offset": 2048, 00:10:14.768 "data_size": 63488 00:10:14.768 } 00:10:14.768 ] 00:10:14.768 }' 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.768 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.028 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:15.028 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.028 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.028 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:15.028 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.287 [2024-10-13 02:24:33.727723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.287 [2024-10-13 02:24:33.727906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.287 [2024-10-13 02:24:33.727946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:15.287 [2024-10-13 02:24:33.727976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.287 [2024-10-13 02:24:33.728428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.287 [2024-10-13 02:24:33.728492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.287 [2024-10-13 02:24:33.728611] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:15.287 [2024-10-13 02:24:33.728665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.287 [2024-10-13 02:24:33.728790] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:10:15.287 [2024-10-13 02:24:33.728831] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.287 [2024-10-13 02:24:33.729089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:15.287 [2024-10-13 02:24:33.729264] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:10:15.287 [2024-10-13 02:24:33.729306] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:10:15.287 [2024-10-13 02:24:33.729452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.287 pt3 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.287 "name": "raid_bdev1", 00:10:15.287 "uuid": "8cd09d91-4216-44df-b9ca-312f1049d86f", 00:10:15.287 "strip_size_kb": 0, 00:10:15.287 "state": "online", 00:10:15.287 "raid_level": "raid1", 00:10:15.287 "superblock": true, 00:10:15.287 "num_base_bdevs": 3, 00:10:15.287 "num_base_bdevs_discovered": 2, 00:10:15.287 "num_base_bdevs_operational": 2, 00:10:15.287 "base_bdevs_list": [ 00:10:15.287 { 00:10:15.287 "name": null, 00:10:15.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.287 "is_configured": false, 00:10:15.287 "data_offset": 2048, 00:10:15.287 "data_size": 63488 00:10:15.287 }, 00:10:15.287 { 00:10:15.287 "name": "pt2", 00:10:15.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.287 "is_configured": true, 00:10:15.287 "data_offset": 2048, 00:10:15.287 "data_size": 63488 00:10:15.287 }, 00:10:15.287 { 00:10:15.287 "name": "pt3", 00:10:15.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.287 "is_configured": true, 00:10:15.287 "data_offset": 2048, 00:10:15.287 "data_size": 63488 00:10:15.287 } 00:10:15.287 ] 00:10:15.287 }' 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.287 02:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.547 02:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:15.547 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.547 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.547 02:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:15.547 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.805 02:24:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.806 [2024-10-13 02:24:34.247441] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8cd09d91-4216-44df-b9ca-312f1049d86f '!=' 8cd09d91-4216-44df-b9ca-312f1049d86f ']' 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79548 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79548 ']' 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79548 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79548 00:10:15.806 killing process with pid 79548 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79548' 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79548 00:10:15.806 [2024-10-13 02:24:34.331386] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.806 [2024-10-13 02:24:34.331512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.806 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79548 00:10:15.806 [2024-10-13 02:24:34.331618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.806 [2024-10-13 02:24:34.331629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:10:15.806 [2024-10-13 02:24:34.393599] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.375 ************************************ 00:10:16.375 END TEST raid_superblock_test 00:10:16.375 ************************************ 00:10:16.375 02:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:16.375 00:10:16.375 real 0m6.855s 00:10:16.375 user 0m11.385s 00:10:16.375 sys 0m1.364s 00:10:16.375 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.375 02:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.375 02:24:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:16.375 02:24:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:16.375 02:24:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.375 02:24:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.375 ************************************ 00:10:16.375 START TEST raid_read_error_test 00:10:16.375 ************************************ 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:10:16.375 02:24:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:16.375 02:24:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8Bky8B6TRO 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79982 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79982 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 79982 ']' 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.375 02:24:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.375 [2024-10-13 02:24:34.938601] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:16.375 [2024-10-13 02:24:34.938831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79982 ] 00:10:16.635 [2024-10-13 02:24:35.083318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.635 [2024-10-13 02:24:35.157984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.635 [2024-10-13 02:24:35.235570] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.635 [2024-10-13 02:24:35.235623] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.204 BaseBdev1_malloc 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.204 true 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.204 [2024-10-13 02:24:35.826982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:17.204 [2024-10-13 02:24:35.827128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.204 [2024-10-13 02:24:35.827195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:17.204 [2024-10-13 02:24:35.827234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.204 [2024-10-13 02:24:35.829829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.204 [2024-10-13 02:24:35.829929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:17.204 BaseBdev1 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.204 BaseBdev2_malloc 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.204 true 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.204 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.204 [2024-10-13 02:24:35.882357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:17.204 [2024-10-13 02:24:35.882479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.204 [2024-10-13 02:24:35.882531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:17.204 [2024-10-13 02:24:35.882599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.204 [2024-10-13 02:24:35.885155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.204 [2024-10-13 02:24:35.885192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:17.473 BaseBdev2 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 BaseBdev3_malloc 00:10:17.473 02:24:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 true 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 [2024-10-13 02:24:35.929390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:17.473 [2024-10-13 02:24:35.929501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.473 [2024-10-13 02:24:35.929579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:17.473 [2024-10-13 02:24:35.929620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.473 [2024-10-13 02:24:35.932264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.473 [2024-10-13 02:24:35.932343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:17.473 BaseBdev3 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 [2024-10-13 02:24:35.941471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.473 [2024-10-13 02:24:35.943758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.473 [2024-10-13 02:24:35.943925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.473 [2024-10-13 02:24:35.944176] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:17.473 [2024-10-13 02:24:35.944233] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.474 [2024-10-13 02:24:35.944564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:17.474 [2024-10-13 02:24:35.944787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:17.474 [2024-10-13 02:24:35.944834] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:17.474 [2024-10-13 02:24:35.945071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.474 02:24:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.474 02:24:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.474 02:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.474 "name": "raid_bdev1", 00:10:17.474 "uuid": "658d028b-ab77-42c6-b9de-8f5ee31065ee", 00:10:17.474 "strip_size_kb": 0, 00:10:17.474 "state": "online", 00:10:17.474 "raid_level": "raid1", 00:10:17.474 "superblock": true, 00:10:17.474 "num_base_bdevs": 3, 00:10:17.474 "num_base_bdevs_discovered": 3, 00:10:17.474 "num_base_bdevs_operational": 3, 00:10:17.474 "base_bdevs_list": [ 00:10:17.474 { 00:10:17.474 "name": "BaseBdev1", 00:10:17.474 "uuid": "4ee7ba72-9cda-5ed1-88bd-eaa59a230552", 00:10:17.474 "is_configured": true, 00:10:17.474 "data_offset": 2048, 00:10:17.474 "data_size": 63488 00:10:17.474 }, 00:10:17.474 { 00:10:17.474 "name": "BaseBdev2", 00:10:17.474 "uuid": "21f9c7fa-9b4a-50e9-a70c-c2e05adb58c5", 00:10:17.474 "is_configured": true, 00:10:17.474 "data_offset": 2048, 00:10:17.474 "data_size": 63488 
00:10:17.474 }, 00:10:17.474 { 00:10:17.474 "name": "BaseBdev3", 00:10:17.474 "uuid": "635b183e-a50c-513d-9fd7-21ff5cbf1a7e", 00:10:17.474 "is_configured": true, 00:10:17.474 "data_offset": 2048, 00:10:17.474 "data_size": 63488 00:10:17.474 } 00:10:17.474 ] 00:10:17.474 }' 00:10:17.474 02:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.474 02:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.749 02:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:17.749 02:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:18.008 [2024-10-13 02:24:36.485049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.947 
02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.947 "name": "raid_bdev1", 00:10:18.947 "uuid": "658d028b-ab77-42c6-b9de-8f5ee31065ee", 00:10:18.947 "strip_size_kb": 0, 00:10:18.947 "state": "online", 00:10:18.947 "raid_level": "raid1", 00:10:18.947 "superblock": true, 00:10:18.947 "num_base_bdevs": 3, 00:10:18.947 "num_base_bdevs_discovered": 3, 00:10:18.947 "num_base_bdevs_operational": 3, 00:10:18.947 "base_bdevs_list": [ 00:10:18.947 { 00:10:18.947 "name": "BaseBdev1", 00:10:18.947 "uuid": "4ee7ba72-9cda-5ed1-88bd-eaa59a230552", 
00:10:18.947 "is_configured": true, 00:10:18.947 "data_offset": 2048, 00:10:18.947 "data_size": 63488 00:10:18.947 }, 00:10:18.947 { 00:10:18.947 "name": "BaseBdev2", 00:10:18.947 "uuid": "21f9c7fa-9b4a-50e9-a70c-c2e05adb58c5", 00:10:18.947 "is_configured": true, 00:10:18.947 "data_offset": 2048, 00:10:18.947 "data_size": 63488 00:10:18.947 }, 00:10:18.947 { 00:10:18.947 "name": "BaseBdev3", 00:10:18.947 "uuid": "635b183e-a50c-513d-9fd7-21ff5cbf1a7e", 00:10:18.947 "is_configured": true, 00:10:18.947 "data_offset": 2048, 00:10:18.947 "data_size": 63488 00:10:18.947 } 00:10:18.947 ] 00:10:18.947 }' 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.947 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.208 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.208 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.208 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.208 [2024-10-13 02:24:37.835582] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.208 [2024-10-13 02:24:37.835685] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.208 [2024-10-13 02:24:37.838458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.209 [2024-10-13 02:24:37.838565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.209 [2024-10-13 02:24:37.838720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.209 [2024-10-13 02:24:37.838796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:19.209 { 00:10:19.209 "results": [ 00:10:19.209 { 00:10:19.209 "job": "raid_bdev1", 
00:10:19.209 "core_mask": "0x1", 00:10:19.209 "workload": "randrw", 00:10:19.209 "percentage": 50, 00:10:19.209 "status": "finished", 00:10:19.209 "queue_depth": 1, 00:10:19.209 "io_size": 131072, 00:10:19.209 "runtime": 1.351036, 00:10:19.209 "iops": 10657.00691913465, 00:10:19.209 "mibps": 1332.1258648918313, 00:10:19.209 "io_failed": 0, 00:10:19.209 "io_timeout": 0, 00:10:19.209 "avg_latency_us": 91.19727958334825, 00:10:19.209 "min_latency_us": 23.36419213973799, 00:10:19.209 "max_latency_us": 1502.46288209607 00:10:19.209 } 00:10:19.209 ], 00:10:19.209 "core_count": 1 00:10:19.209 } 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79982 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 79982 ']' 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 79982 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79982 00:10:19.209 killing process with pid 79982 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79982' 00:10:19.209 02:24:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 79982 00:10:19.209 [2024-10-13 02:24:37.888340] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.209 02:24:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 79982 00:10:19.467 [2024-10-13 02:24:37.939311] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8Bky8B6TRO 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:19.726 ************************************ 00:10:19.726 END TEST raid_read_error_test 00:10:19.726 ************************************ 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:19.726 00:10:19.726 real 0m3.487s 00:10:19.726 user 0m4.250s 00:10:19.726 sys 0m0.651s 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.726 02:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.726 02:24:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:19.726 02:24:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:19.726 02:24:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.726 02:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.726 ************************************ 00:10:19.726 START TEST raid_write_error_test 00:10:19.726 ************************************ 00:10:19.726 02:24:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:10:19.726 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:19.726 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:19.726 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:19.726 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:19.726 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.726 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:19.726 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.726 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JEF2hq1iC1 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80117 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80117 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80117 ']' 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.985 02:24:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 [2024-10-13 02:24:38.511815] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:19.985 [2024-10-13 02:24:38.511994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80117 ] 00:10:19.985 [2024-10-13 02:24:38.658329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.245 [2024-10-13 02:24:38.732839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.245 [2024-10-13 02:24:38.813355] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.245 [2024-10-13 02:24:38.813390] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.813 BaseBdev1_malloc 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.813 true 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.813 [2024-10-13 02:24:39.393878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.813 [2024-10-13 02:24:39.393987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.813 [2024-10-13 02:24:39.394032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:20.813 [2024-10-13 02:24:39.394070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.813 [2024-10-13 02:24:39.396576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.813 [2024-10-13 02:24:39.396649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.813 BaseBdev1 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.813 BaseBdev2_malloc 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.813 true 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.813 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.814 [2024-10-13 02:24:39.450154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.814 [2024-10-13 02:24:39.450273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.814 [2024-10-13 02:24:39.450298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:20.814 [2024-10-13 02:24:39.450307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.814 [2024-10-13 02:24:39.452788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.814 [2024-10-13 02:24:39.452863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.814 BaseBdev2 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.814 02:24:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.814 BaseBdev3_malloc 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.814 true 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.814 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.073 [2024-10-13 02:24:39.497208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:21.073 [2024-10-13 02:24:39.497301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.073 [2024-10-13 02:24:39.497326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:21.073 [2024-10-13 02:24:39.497335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.073 [2024-10-13 02:24:39.499891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.073 [2024-10-13 02:24:39.499924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:21.073 BaseBdev3 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.073 [2024-10-13 02:24:39.509286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.073 [2024-10-13 02:24:39.511432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.073 [2024-10-13 02:24:39.511570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.073 [2024-10-13 02:24:39.511793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:21.073 [2024-10-13 02:24:39.511850] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.073 [2024-10-13 02:24:39.512145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:21.073 [2024-10-13 02:24:39.512369] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:21.073 [2024-10-13 02:24:39.512412] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:21.073 [2024-10-13 02:24:39.512591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.073 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.074 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.074 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.074 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.074 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.074 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.074 "name": "raid_bdev1", 00:10:21.074 "uuid": "3f62f422-a987-4c08-a9a0-5ca0b3a0240f", 00:10:21.074 "strip_size_kb": 0, 00:10:21.074 "state": "online", 00:10:21.074 "raid_level": "raid1", 00:10:21.074 "superblock": true, 00:10:21.074 "num_base_bdevs": 3, 00:10:21.074 "num_base_bdevs_discovered": 3, 00:10:21.074 "num_base_bdevs_operational": 3, 00:10:21.074 "base_bdevs_list": [ 00:10:21.074 { 00:10:21.074 "name": "BaseBdev1", 00:10:21.074 
"uuid": "da9dcbc5-2031-5f1c-8e07-40a0ca53d3be", 00:10:21.074 "is_configured": true, 00:10:21.074 "data_offset": 2048, 00:10:21.074 "data_size": 63488 00:10:21.074 }, 00:10:21.074 { 00:10:21.074 "name": "BaseBdev2", 00:10:21.074 "uuid": "9493f5d6-2f2e-5981-9e3e-c1923c9a7d2f", 00:10:21.074 "is_configured": true, 00:10:21.074 "data_offset": 2048, 00:10:21.074 "data_size": 63488 00:10:21.074 }, 00:10:21.074 { 00:10:21.074 "name": "BaseBdev3", 00:10:21.074 "uuid": "fe611e3d-ca08-590e-8126-2bda015968ba", 00:10:21.074 "is_configured": true, 00:10:21.074 "data_offset": 2048, 00:10:21.074 "data_size": 63488 00:10:21.074 } 00:10:21.074 ] 00:10:21.074 }' 00:10:21.074 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.074 02:24:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.333 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:21.333 02:24:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.591 [2024-10-13 02:24:40.024991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.528 [2024-10-13 02:24:40.941446] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:22.528 [2024-10-13 02:24:40.941567] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:22.528 [2024-10-13 02:24:40.941807] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 
00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.528 "name": "raid_bdev1", 00:10:22.528 "uuid": "3f62f422-a987-4c08-a9a0-5ca0b3a0240f", 00:10:22.528 "strip_size_kb": 0, 00:10:22.528 "state": "online", 00:10:22.528 "raid_level": "raid1", 00:10:22.528 "superblock": true, 00:10:22.528 "num_base_bdevs": 3, 00:10:22.528 "num_base_bdevs_discovered": 2, 00:10:22.528 "num_base_bdevs_operational": 2, 00:10:22.528 "base_bdevs_list": [ 00:10:22.528 { 00:10:22.528 "name": null, 00:10:22.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.528 "is_configured": false, 00:10:22.528 "data_offset": 0, 00:10:22.528 "data_size": 63488 00:10:22.528 }, 00:10:22.528 { 00:10:22.528 "name": "BaseBdev2", 00:10:22.528 "uuid": "9493f5d6-2f2e-5981-9e3e-c1923c9a7d2f", 00:10:22.528 "is_configured": true, 00:10:22.528 "data_offset": 2048, 00:10:22.528 "data_size": 63488 00:10:22.528 }, 00:10:22.528 { 00:10:22.528 "name": "BaseBdev3", 00:10:22.528 "uuid": "fe611e3d-ca08-590e-8126-2bda015968ba", 00:10:22.528 "is_configured": true, 00:10:22.528 "data_offset": 2048, 00:10:22.528 "data_size": 63488 00:10:22.528 } 00:10:22.528 ] 00:10:22.528 }' 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.528 02:24:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.788 [2024-10-13 02:24:41.384807] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.788 [2024-10-13 02:24:41.384909] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.788 [2024-10-13 02:24:41.387490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.788 [2024-10-13 02:24:41.387595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.788 [2024-10-13 02:24:41.387745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.788 [2024-10-13 02:24:41.387802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:22.788 { 00:10:22.788 "results": [ 00:10:22.788 { 00:10:22.788 "job": "raid_bdev1", 00:10:22.788 "core_mask": "0x1", 00:10:22.788 "workload": "randrw", 00:10:22.788 "percentage": 50, 00:10:22.788 "status": "finished", 00:10:22.788 "queue_depth": 1, 00:10:22.788 "io_size": 131072, 00:10:22.788 "runtime": 1.360194, 00:10:22.788 "iops": 12090.922324315503, 00:10:22.788 "mibps": 1511.3652905394379, 00:10:22.788 "io_failed": 0, 00:10:22.788 "io_timeout": 0, 00:10:22.788 "avg_latency_us": 80.08195141224397, 00:10:22.788 "min_latency_us": 22.358078602620086, 00:10:22.788 "max_latency_us": 1466.6899563318777 00:10:22.788 } 00:10:22.788 ], 00:10:22.788 "core_count": 1 00:10:22.788 } 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80117 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80117 ']' 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80117 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:22.788 02:24:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80117 00:10:22.788 killing process with pid 80117 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80117' 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80117 00:10:22.788 [2024-10-13 02:24:41.427175] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.788 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80117 00:10:23.048 [2024-10-13 02:24:41.476194] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.308 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JEF2hq1iC1 00:10:23.308 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:23.308 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:23.309 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:23.309 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:23.309 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.309 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:23.309 02:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:23.309 00:10:23.309 real 0m3.456s 00:10:23.309 user 0m4.201s 00:10:23.309 sys 0m0.644s 00:10:23.309 02:24:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.309 02:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.309 ************************************ 00:10:23.309 END TEST raid_write_error_test 00:10:23.309 ************************************ 00:10:23.309 02:24:41 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:23.309 02:24:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:23.309 02:24:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:23.309 02:24:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:23.309 02:24:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.309 02:24:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.309 ************************************ 00:10:23.309 START TEST raid_state_function_test 00:10:23.309 ************************************ 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:23.309 
02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:23.309 Process raid pid: 80244 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80244 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80244' 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80244 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80244 ']' 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.309 02:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.569 [2024-10-13 02:24:42.037740] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
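In the trace above, `raid_state_function_test` derives its `bdev_raid_create` arguments from the RAID level: any level other than raid1 gets a strip size argument (`-z 64`), and `superblock=false` leaves `superblock_create_arg` empty. A minimal Python sketch of that argument assembly — the function name is my own, and the `-s` superblock flag is an assumption, not something shown in this trace (the `-z`, `-r`, `-b`, and `-n` flags do appear above):

```python
def build_raid_create_args(name, level, base_bdevs, strip_size_kb=64, superblock=False):
    """Assemble bdev_raid_create RPC arguments the way the shell test does:
    raid1 takes no strip size; every other level passes -z <kb>."""
    args = ["bdev_raid_create"]
    if level != "raid1":      # mirrors: '[' raid0 '!=' raid1 ']' -> strip_size=64
        args += ["-z", str(strip_size_kb)]
    if superblock:            # trace shows superblock=false -> superblock_create_arg=''
        args.append("-s")     # assumed flag name, not taken from this log
    args += ["-r", level, "-b", " ".join(base_bdevs), "-n", name]
    return args

# raid0 over four base bdevs, as in the trace above
print(build_raid_create_args("Existed_Raid", "raid0",
                             ["BaseBdev1", "BaseBdev2", "BaseBdev3", "BaseBdev4"]))
```

This reproduces only the branching visible in the xtrace (`strip_size_create_arg='-z 64'`, empty superblock arg), not the full option surface of the real RPC.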
00:10:23.569 [2024-10-13 02:24:42.038495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.569 [2024-10-13 02:24:42.167019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.569 [2024-10-13 02:24:42.241757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.828 [2024-10-13 02:24:42.319983] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.829 [2024-10-13 02:24:42.320023] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.398 [2024-10-13 02:24:42.868081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.398 [2024-10-13 02:24:42.868199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.398 [2024-10-13 02:24:42.868227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.398 [2024-10-13 02:24:42.868239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.398 [2024-10-13 02:24:42.868245] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:24.398 [2024-10-13 02:24:42.868258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.398 [2024-10-13 02:24:42.868264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.398 [2024-10-13 02:24:42.868274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.398 "name": "Existed_Raid", 00:10:24.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.398 "strip_size_kb": 64, 00:10:24.398 "state": "configuring", 00:10:24.398 "raid_level": "raid0", 00:10:24.398 "superblock": false, 00:10:24.398 "num_base_bdevs": 4, 00:10:24.398 "num_base_bdevs_discovered": 0, 00:10:24.398 "num_base_bdevs_operational": 4, 00:10:24.398 "base_bdevs_list": [ 00:10:24.398 { 00:10:24.398 "name": "BaseBdev1", 00:10:24.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.398 "is_configured": false, 00:10:24.398 "data_offset": 0, 00:10:24.398 "data_size": 0 00:10:24.398 }, 00:10:24.398 { 00:10:24.398 "name": "BaseBdev2", 00:10:24.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.398 "is_configured": false, 00:10:24.398 "data_offset": 0, 00:10:24.398 "data_size": 0 00:10:24.398 }, 00:10:24.398 { 00:10:24.398 "name": "BaseBdev3", 00:10:24.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.398 "is_configured": false, 00:10:24.398 "data_offset": 0, 00:10:24.398 "data_size": 0 00:10:24.398 }, 00:10:24.398 { 00:10:24.398 "name": "BaseBdev4", 00:10:24.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.398 "is_configured": false, 00:10:24.398 "data_offset": 0, 00:10:24.398 "data_size": 0 00:10:24.398 } 00:10:24.398 ] 00:10:24.398 }' 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.398 02:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.658 [2024-10-13 02:24:43.307166] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.658 [2024-10-13 02:24:43.307272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.658 [2024-10-13 02:24:43.319153] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.658 [2024-10-13 02:24:43.319234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.658 [2024-10-13 02:24:43.319263] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.658 [2024-10-13 02:24:43.319287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.658 [2024-10-13 02:24:43.319305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:24.658 [2024-10-13 02:24:43.319342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.658 [2024-10-13 02:24:43.319367] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.658 [2024-10-13 02:24:43.319392] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.658 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.918 [2024-10-13 02:24:43.346470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.919 BaseBdev1 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.919 [ 00:10:24.919 { 00:10:24.919 "name": "BaseBdev1", 00:10:24.919 "aliases": [ 00:10:24.919 "6bf855a8-ace9-4415-b5d6-6b8c7161c3a2" 00:10:24.919 ], 00:10:24.919 "product_name": "Malloc disk", 00:10:24.919 "block_size": 512, 00:10:24.919 "num_blocks": 65536, 00:10:24.919 "uuid": "6bf855a8-ace9-4415-b5d6-6b8c7161c3a2", 00:10:24.919 "assigned_rate_limits": { 00:10:24.919 "rw_ios_per_sec": 0, 00:10:24.919 "rw_mbytes_per_sec": 0, 00:10:24.919 "r_mbytes_per_sec": 0, 00:10:24.919 "w_mbytes_per_sec": 0 00:10:24.919 }, 00:10:24.919 "claimed": true, 00:10:24.919 "claim_type": "exclusive_write", 00:10:24.919 "zoned": false, 00:10:24.919 "supported_io_types": { 00:10:24.919 "read": true, 00:10:24.919 "write": true, 00:10:24.919 "unmap": true, 00:10:24.919 "flush": true, 00:10:24.919 "reset": true, 00:10:24.919 "nvme_admin": false, 00:10:24.919 "nvme_io": false, 00:10:24.919 "nvme_io_md": false, 00:10:24.919 "write_zeroes": true, 00:10:24.919 "zcopy": true, 00:10:24.919 "get_zone_info": false, 00:10:24.919 "zone_management": false, 00:10:24.919 "zone_append": false, 00:10:24.919 "compare": false, 00:10:24.919 "compare_and_write": false, 00:10:24.919 "abort": true, 00:10:24.919 "seek_hole": false, 00:10:24.919 "seek_data": false, 00:10:24.919 "copy": true, 00:10:24.919 "nvme_iov_md": false 00:10:24.919 }, 00:10:24.919 "memory_domains": [ 00:10:24.919 { 00:10:24.919 "dma_device_id": "system", 00:10:24.919 "dma_device_type": 1 00:10:24.919 }, 00:10:24.919 { 00:10:24.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.919 "dma_device_type": 2 00:10:24.919 } 00:10:24.919 ], 00:10:24.919 "driver_specific": {} 00:10:24.919 } 00:10:24.919 ] 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.919 "name": "Existed_Raid", 
00:10:24.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.919 "strip_size_kb": 64, 00:10:24.919 "state": "configuring", 00:10:24.919 "raid_level": "raid0", 00:10:24.919 "superblock": false, 00:10:24.919 "num_base_bdevs": 4, 00:10:24.919 "num_base_bdevs_discovered": 1, 00:10:24.919 "num_base_bdevs_operational": 4, 00:10:24.919 "base_bdevs_list": [ 00:10:24.919 { 00:10:24.919 "name": "BaseBdev1", 00:10:24.919 "uuid": "6bf855a8-ace9-4415-b5d6-6b8c7161c3a2", 00:10:24.919 "is_configured": true, 00:10:24.919 "data_offset": 0, 00:10:24.919 "data_size": 65536 00:10:24.919 }, 00:10:24.919 { 00:10:24.919 "name": "BaseBdev2", 00:10:24.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.919 "is_configured": false, 00:10:24.919 "data_offset": 0, 00:10:24.919 "data_size": 0 00:10:24.919 }, 00:10:24.919 { 00:10:24.919 "name": "BaseBdev3", 00:10:24.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.919 "is_configured": false, 00:10:24.919 "data_offset": 0, 00:10:24.919 "data_size": 0 00:10:24.919 }, 00:10:24.919 { 00:10:24.919 "name": "BaseBdev4", 00:10:24.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.919 "is_configured": false, 00:10:24.919 "data_offset": 0, 00:10:24.919 "data_size": 0 00:10:24.919 } 00:10:24.919 ] 00:10:24.919 }' 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.919 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.177 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.177 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.177 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.436 [2024-10-13 02:24:43.861649] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.436 [2024-10-13 02:24:43.861757] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.436 [2024-10-13 02:24:43.873683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.436 [2024-10-13 02:24:43.875984] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.436 [2024-10-13 02:24:43.876060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.436 [2024-10-13 02:24:43.876090] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.436 [2024-10-13 02:24:43.876112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.436 [2024-10-13 02:24:43.876129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.436 [2024-10-13 02:24:43.876165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
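The `verify_raid_bdev_state Existed_Raid configuring raid0 64 4` call above fetches `bdev_raid_get_bdevs all`, filters the result with `jq` for the named array, and compares the state and base-bdev counts against the expected values. A sketch of the same check in Python, fed the `Existed_Raid` info printed in this trace (the helper is a hypothetical re-implementation, not SPDK test code):

```python
import json

def verify_raid_bdev_state(info_json, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Check one raid bdev's JSON the way the shell helper does."""
    info = json.loads(info_json)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # "discovered" counts only base bdevs that are actually configured
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

# Trimmed from the Existed_Raid dump in the trace: BaseBdev1 claimed, three pending
info = '''{
  "name": "Existed_Raid", "strip_size_kb": 64, "state": "configuring",
  "raid_level": "raid0", "superblock": false, "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}'''
print(verify_raid_bdev_state(info, "configuring", "raid0", 64, 4))  # prints 1
```

The array stays in `configuring` until every slot in `base_bdevs_list` is claimed, which is why the trace repeats this verification after each `bdev_malloc_create` / claim step.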
00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.436 "name": "Existed_Raid", 00:10:25.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.436 "strip_size_kb": 64, 00:10:25.436 "state": "configuring", 00:10:25.436 "raid_level": "raid0", 00:10:25.436 "superblock": false, 00:10:25.436 "num_base_bdevs": 4, 00:10:25.436 
"num_base_bdevs_discovered": 1, 00:10:25.436 "num_base_bdevs_operational": 4, 00:10:25.436 "base_bdevs_list": [ 00:10:25.436 { 00:10:25.436 "name": "BaseBdev1", 00:10:25.436 "uuid": "6bf855a8-ace9-4415-b5d6-6b8c7161c3a2", 00:10:25.436 "is_configured": true, 00:10:25.436 "data_offset": 0, 00:10:25.436 "data_size": 65536 00:10:25.436 }, 00:10:25.436 { 00:10:25.436 "name": "BaseBdev2", 00:10:25.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.436 "is_configured": false, 00:10:25.436 "data_offset": 0, 00:10:25.436 "data_size": 0 00:10:25.436 }, 00:10:25.436 { 00:10:25.436 "name": "BaseBdev3", 00:10:25.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.436 "is_configured": false, 00:10:25.436 "data_offset": 0, 00:10:25.436 "data_size": 0 00:10:25.436 }, 00:10:25.436 { 00:10:25.436 "name": "BaseBdev4", 00:10:25.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.436 "is_configured": false, 00:10:25.436 "data_offset": 0, 00:10:25.436 "data_size": 0 00:10:25.436 } 00:10:25.436 ] 00:10:25.436 }' 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.436 02:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.696 [2024-10-13 02:24:44.343587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.696 BaseBdev2 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:25.696 02:24:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.696 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.696 [ 00:10:25.696 { 00:10:25.696 "name": "BaseBdev2", 00:10:25.696 "aliases": [ 00:10:25.696 "51d5880f-2fa6-46a3-8a3a-7aa066bdaf70" 00:10:25.696 ], 00:10:25.696 "product_name": "Malloc disk", 00:10:25.696 "block_size": 512, 00:10:25.696 "num_blocks": 65536, 00:10:25.696 "uuid": "51d5880f-2fa6-46a3-8a3a-7aa066bdaf70", 00:10:25.696 "assigned_rate_limits": { 00:10:25.696 "rw_ios_per_sec": 0, 00:10:25.696 "rw_mbytes_per_sec": 0, 00:10:25.696 "r_mbytes_per_sec": 0, 00:10:25.696 "w_mbytes_per_sec": 0 00:10:25.696 }, 00:10:25.696 "claimed": true, 00:10:25.696 "claim_type": "exclusive_write", 00:10:25.956 "zoned": false, 00:10:25.956 "supported_io_types": { 
00:10:25.956 "read": true, 00:10:25.956 "write": true, 00:10:25.956 "unmap": true, 00:10:25.956 "flush": true, 00:10:25.956 "reset": true, 00:10:25.956 "nvme_admin": false, 00:10:25.956 "nvme_io": false, 00:10:25.956 "nvme_io_md": false, 00:10:25.956 "write_zeroes": true, 00:10:25.956 "zcopy": true, 00:10:25.956 "get_zone_info": false, 00:10:25.956 "zone_management": false, 00:10:25.956 "zone_append": false, 00:10:25.956 "compare": false, 00:10:25.956 "compare_and_write": false, 00:10:25.956 "abort": true, 00:10:25.956 "seek_hole": false, 00:10:25.956 "seek_data": false, 00:10:25.956 "copy": true, 00:10:25.956 "nvme_iov_md": false 00:10:25.956 }, 00:10:25.956 "memory_domains": [ 00:10:25.956 { 00:10:25.956 "dma_device_id": "system", 00:10:25.956 "dma_device_type": 1 00:10:25.956 }, 00:10:25.956 { 00:10:25.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.956 "dma_device_type": 2 00:10:25.956 } 00:10:25.956 ], 00:10:25.956 "driver_specific": {} 00:10:25.956 } 00:10:25.956 ] 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.956 "name": "Existed_Raid", 00:10:25.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.956 "strip_size_kb": 64, 00:10:25.956 "state": "configuring", 00:10:25.956 "raid_level": "raid0", 00:10:25.956 "superblock": false, 00:10:25.956 "num_base_bdevs": 4, 00:10:25.956 "num_base_bdevs_discovered": 2, 00:10:25.956 "num_base_bdevs_operational": 4, 00:10:25.956 "base_bdevs_list": [ 00:10:25.956 { 00:10:25.956 "name": "BaseBdev1", 00:10:25.956 "uuid": "6bf855a8-ace9-4415-b5d6-6b8c7161c3a2", 00:10:25.956 "is_configured": true, 00:10:25.956 "data_offset": 0, 00:10:25.956 "data_size": 65536 00:10:25.956 }, 00:10:25.956 { 00:10:25.956 "name": "BaseBdev2", 00:10:25.956 "uuid": "51d5880f-2fa6-46a3-8a3a-7aa066bdaf70", 00:10:25.956 
"is_configured": true, 00:10:25.956 "data_offset": 0, 00:10:25.956 "data_size": 65536 00:10:25.956 }, 00:10:25.956 { 00:10:25.956 "name": "BaseBdev3", 00:10:25.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.956 "is_configured": false, 00:10:25.956 "data_offset": 0, 00:10:25.956 "data_size": 0 00:10:25.956 }, 00:10:25.956 { 00:10:25.956 "name": "BaseBdev4", 00:10:25.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.956 "is_configured": false, 00:10:25.956 "data_offset": 0, 00:10:25.956 "data_size": 0 00:10:25.956 } 00:10:25.956 ] 00:10:25.956 }' 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.956 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.216 [2024-10-13 02:24:44.832152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.216 BaseBdev3 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.216 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.217 [ 00:10:26.217 { 00:10:26.217 "name": "BaseBdev3", 00:10:26.217 "aliases": [ 00:10:26.217 "db6b8b02-9617-4e8c-a018-641281d32bdf" 00:10:26.217 ], 00:10:26.217 "product_name": "Malloc disk", 00:10:26.217 "block_size": 512, 00:10:26.217 "num_blocks": 65536, 00:10:26.217 "uuid": "db6b8b02-9617-4e8c-a018-641281d32bdf", 00:10:26.217 "assigned_rate_limits": { 00:10:26.217 "rw_ios_per_sec": 0, 00:10:26.217 "rw_mbytes_per_sec": 0, 00:10:26.217 "r_mbytes_per_sec": 0, 00:10:26.217 "w_mbytes_per_sec": 0 00:10:26.217 }, 00:10:26.217 "claimed": true, 00:10:26.217 "claim_type": "exclusive_write", 00:10:26.217 "zoned": false, 00:10:26.217 "supported_io_types": { 00:10:26.217 "read": true, 00:10:26.217 "write": true, 00:10:26.217 "unmap": true, 00:10:26.217 "flush": true, 00:10:26.217 "reset": true, 00:10:26.217 "nvme_admin": false, 00:10:26.217 "nvme_io": false, 00:10:26.217 "nvme_io_md": false, 00:10:26.217 "write_zeroes": true, 00:10:26.217 "zcopy": true, 00:10:26.217 "get_zone_info": false, 00:10:26.217 "zone_management": false, 00:10:26.217 "zone_append": false, 00:10:26.217 "compare": false, 00:10:26.217 "compare_and_write": false, 
00:10:26.217 "abort": true, 00:10:26.217 "seek_hole": false, 00:10:26.217 "seek_data": false, 00:10:26.217 "copy": true, 00:10:26.217 "nvme_iov_md": false 00:10:26.217 }, 00:10:26.217 "memory_domains": [ 00:10:26.217 { 00:10:26.217 "dma_device_id": "system", 00:10:26.217 "dma_device_type": 1 00:10:26.217 }, 00:10:26.217 { 00:10:26.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.217 "dma_device_type": 2 00:10:26.217 } 00:10:26.217 ], 00:10:26.217 "driver_specific": {} 00:10:26.217 } 00:10:26.217 ] 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.217 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.477 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.477 "name": "Existed_Raid", 00:10:26.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.477 "strip_size_kb": 64, 00:10:26.477 "state": "configuring", 00:10:26.477 "raid_level": "raid0", 00:10:26.477 "superblock": false, 00:10:26.477 "num_base_bdevs": 4, 00:10:26.477 "num_base_bdevs_discovered": 3, 00:10:26.477 "num_base_bdevs_operational": 4, 00:10:26.477 "base_bdevs_list": [ 00:10:26.477 { 00:10:26.477 "name": "BaseBdev1", 00:10:26.477 "uuid": "6bf855a8-ace9-4415-b5d6-6b8c7161c3a2", 00:10:26.477 "is_configured": true, 00:10:26.477 "data_offset": 0, 00:10:26.477 "data_size": 65536 00:10:26.477 }, 00:10:26.477 { 00:10:26.477 "name": "BaseBdev2", 00:10:26.477 "uuid": "51d5880f-2fa6-46a3-8a3a-7aa066bdaf70", 00:10:26.477 "is_configured": true, 00:10:26.477 "data_offset": 0, 00:10:26.477 "data_size": 65536 00:10:26.477 }, 00:10:26.477 { 00:10:26.477 "name": "BaseBdev3", 00:10:26.477 "uuid": "db6b8b02-9617-4e8c-a018-641281d32bdf", 00:10:26.477 "is_configured": true, 00:10:26.477 "data_offset": 0, 00:10:26.477 "data_size": 65536 00:10:26.477 }, 00:10:26.477 { 00:10:26.477 "name": "BaseBdev4", 00:10:26.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.477 "is_configured": false, 
00:10:26.477 "data_offset": 0, 00:10:26.477 "data_size": 0 00:10:26.477 } 00:10:26.477 ] 00:10:26.477 }' 00:10:26.477 02:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.477 02:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.737 [2024-10-13 02:24:45.304541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.737 [2024-10-13 02:24:45.304666] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:26.737 [2024-10-13 02:24:45.304695] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:26.737 [2024-10-13 02:24:45.305059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:26.737 [2024-10-13 02:24:45.305259] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:26.737 [2024-10-13 02:24:45.305313] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:26.737 [2024-10-13 02:24:45.305615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.737 BaseBdev4 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.737 [ 00:10:26.737 { 00:10:26.737 "name": "BaseBdev4", 00:10:26.737 "aliases": [ 00:10:26.737 "a4c006bf-ac6e-4433-b72c-0fb09b31ac58" 00:10:26.737 ], 00:10:26.737 "product_name": "Malloc disk", 00:10:26.737 "block_size": 512, 00:10:26.737 "num_blocks": 65536, 00:10:26.737 "uuid": "a4c006bf-ac6e-4433-b72c-0fb09b31ac58", 00:10:26.737 "assigned_rate_limits": { 00:10:26.737 "rw_ios_per_sec": 0, 00:10:26.737 "rw_mbytes_per_sec": 0, 00:10:26.737 "r_mbytes_per_sec": 0, 00:10:26.737 "w_mbytes_per_sec": 0 00:10:26.737 }, 00:10:26.737 "claimed": true, 00:10:26.737 "claim_type": "exclusive_write", 00:10:26.737 "zoned": false, 00:10:26.737 "supported_io_types": { 00:10:26.737 "read": true, 00:10:26.737 "write": true, 00:10:26.737 "unmap": true, 00:10:26.737 "flush": true, 00:10:26.737 "reset": true, 00:10:26.737 
"nvme_admin": false, 00:10:26.737 "nvme_io": false, 00:10:26.737 "nvme_io_md": false, 00:10:26.737 "write_zeroes": true, 00:10:26.737 "zcopy": true, 00:10:26.737 "get_zone_info": false, 00:10:26.737 "zone_management": false, 00:10:26.737 "zone_append": false, 00:10:26.737 "compare": false, 00:10:26.737 "compare_and_write": false, 00:10:26.737 "abort": true, 00:10:26.737 "seek_hole": false, 00:10:26.737 "seek_data": false, 00:10:26.737 "copy": true, 00:10:26.737 "nvme_iov_md": false 00:10:26.737 }, 00:10:26.737 "memory_domains": [ 00:10:26.737 { 00:10:26.737 "dma_device_id": "system", 00:10:26.737 "dma_device_type": 1 00:10:26.737 }, 00:10:26.737 { 00:10:26.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.737 "dma_device_type": 2 00:10:26.737 } 00:10:26.737 ], 00:10:26.737 "driver_specific": {} 00:10:26.737 } 00:10:26.737 ] 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.737 02:24:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.737 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.738 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.738 "name": "Existed_Raid", 00:10:26.738 "uuid": "0d6e5917-9a38-486d-a83f-167f963d8456", 00:10:26.738 "strip_size_kb": 64, 00:10:26.738 "state": "online", 00:10:26.738 "raid_level": "raid0", 00:10:26.738 "superblock": false, 00:10:26.738 "num_base_bdevs": 4, 00:10:26.738 "num_base_bdevs_discovered": 4, 00:10:26.738 "num_base_bdevs_operational": 4, 00:10:26.738 "base_bdevs_list": [ 00:10:26.738 { 00:10:26.738 "name": "BaseBdev1", 00:10:26.738 "uuid": "6bf855a8-ace9-4415-b5d6-6b8c7161c3a2", 00:10:26.738 "is_configured": true, 00:10:26.738 "data_offset": 0, 00:10:26.738 "data_size": 65536 00:10:26.738 }, 00:10:26.738 { 00:10:26.738 "name": "BaseBdev2", 00:10:26.738 "uuid": "51d5880f-2fa6-46a3-8a3a-7aa066bdaf70", 00:10:26.738 "is_configured": true, 00:10:26.738 "data_offset": 0, 00:10:26.738 "data_size": 65536 00:10:26.738 }, 00:10:26.738 { 00:10:26.738 "name": "BaseBdev3", 00:10:26.738 "uuid": 
"db6b8b02-9617-4e8c-a018-641281d32bdf", 00:10:26.738 "is_configured": true, 00:10:26.738 "data_offset": 0, 00:10:26.738 "data_size": 65536 00:10:26.738 }, 00:10:26.738 { 00:10:26.738 "name": "BaseBdev4", 00:10:26.738 "uuid": "a4c006bf-ac6e-4433-b72c-0fb09b31ac58", 00:10:26.738 "is_configured": true, 00:10:26.738 "data_offset": 0, 00:10:26.738 "data_size": 65536 00:10:26.738 } 00:10:26.738 ] 00:10:26.738 }' 00:10:26.738 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.738 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.308 [2024-10-13 02:24:45.812136] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.308 02:24:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.308 "name": "Existed_Raid", 00:10:27.308 "aliases": [ 00:10:27.308 "0d6e5917-9a38-486d-a83f-167f963d8456" 00:10:27.308 ], 00:10:27.308 "product_name": "Raid Volume", 00:10:27.308 "block_size": 512, 00:10:27.308 "num_blocks": 262144, 00:10:27.308 "uuid": "0d6e5917-9a38-486d-a83f-167f963d8456", 00:10:27.308 "assigned_rate_limits": { 00:10:27.308 "rw_ios_per_sec": 0, 00:10:27.308 "rw_mbytes_per_sec": 0, 00:10:27.308 "r_mbytes_per_sec": 0, 00:10:27.308 "w_mbytes_per_sec": 0 00:10:27.308 }, 00:10:27.308 "claimed": false, 00:10:27.308 "zoned": false, 00:10:27.308 "supported_io_types": { 00:10:27.308 "read": true, 00:10:27.308 "write": true, 00:10:27.308 "unmap": true, 00:10:27.308 "flush": true, 00:10:27.308 "reset": true, 00:10:27.308 "nvme_admin": false, 00:10:27.308 "nvme_io": false, 00:10:27.308 "nvme_io_md": false, 00:10:27.308 "write_zeroes": true, 00:10:27.308 "zcopy": false, 00:10:27.308 "get_zone_info": false, 00:10:27.308 "zone_management": false, 00:10:27.308 "zone_append": false, 00:10:27.308 "compare": false, 00:10:27.308 "compare_and_write": false, 00:10:27.308 "abort": false, 00:10:27.308 "seek_hole": false, 00:10:27.308 "seek_data": false, 00:10:27.308 "copy": false, 00:10:27.308 "nvme_iov_md": false 00:10:27.308 }, 00:10:27.308 "memory_domains": [ 00:10:27.308 { 00:10:27.308 "dma_device_id": "system", 00:10:27.308 "dma_device_type": 1 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.308 "dma_device_type": 2 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "dma_device_id": "system", 00:10:27.308 "dma_device_type": 1 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.308 "dma_device_type": 2 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "dma_device_id": "system", 00:10:27.308 "dma_device_type": 1 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:27.308 "dma_device_type": 2 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "dma_device_id": "system", 00:10:27.308 "dma_device_type": 1 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.308 "dma_device_type": 2 00:10:27.308 } 00:10:27.308 ], 00:10:27.308 "driver_specific": { 00:10:27.308 "raid": { 00:10:27.308 "uuid": "0d6e5917-9a38-486d-a83f-167f963d8456", 00:10:27.308 "strip_size_kb": 64, 00:10:27.308 "state": "online", 00:10:27.308 "raid_level": "raid0", 00:10:27.308 "superblock": false, 00:10:27.308 "num_base_bdevs": 4, 00:10:27.308 "num_base_bdevs_discovered": 4, 00:10:27.308 "num_base_bdevs_operational": 4, 00:10:27.308 "base_bdevs_list": [ 00:10:27.308 { 00:10:27.308 "name": "BaseBdev1", 00:10:27.308 "uuid": "6bf855a8-ace9-4415-b5d6-6b8c7161c3a2", 00:10:27.308 "is_configured": true, 00:10:27.308 "data_offset": 0, 00:10:27.308 "data_size": 65536 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "name": "BaseBdev2", 00:10:27.308 "uuid": "51d5880f-2fa6-46a3-8a3a-7aa066bdaf70", 00:10:27.308 "is_configured": true, 00:10:27.308 "data_offset": 0, 00:10:27.308 "data_size": 65536 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "name": "BaseBdev3", 00:10:27.308 "uuid": "db6b8b02-9617-4e8c-a018-641281d32bdf", 00:10:27.308 "is_configured": true, 00:10:27.308 "data_offset": 0, 00:10:27.308 "data_size": 65536 00:10:27.308 }, 00:10:27.308 { 00:10:27.308 "name": "BaseBdev4", 00:10:27.308 "uuid": "a4c006bf-ac6e-4433-b72c-0fb09b31ac58", 00:10:27.308 "is_configured": true, 00:10:27.308 "data_offset": 0, 00:10:27.308 "data_size": 65536 00:10:27.308 } 00:10:27.308 ] 00:10:27.308 } 00:10:27.308 } 00:10:27.308 }' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:27.308 BaseBdev2 00:10:27.308 BaseBdev3 
00:10:27.308 BaseBdev4' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.308 02:24:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.308 02:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.568 02:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.568 02:24:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.568 [2024-10-13 02:24:46.075361] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.568 [2024-10-13 02:24:46.075402] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.568 [2024-10-13 02:24:46.075468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.568 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:27.568 "name": "Existed_Raid",
00:10:27.568 "uuid": "0d6e5917-9a38-486d-a83f-167f963d8456",
00:10:27.568 "strip_size_kb": 64,
00:10:27.568 "state": "offline",
00:10:27.568 "raid_level": "raid0",
00:10:27.568 "superblock": false,
00:10:27.568 "num_base_bdevs": 4,
00:10:27.568 "num_base_bdevs_discovered": 3,
00:10:27.569 "num_base_bdevs_operational": 3,
00:10:27.569 "base_bdevs_list": [
00:10:27.569 {
00:10:27.569 "name": null,
00:10:27.569 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:27.569 "is_configured": false,
00:10:27.569 "data_offset": 0,
00:10:27.569 "data_size": 65536
00:10:27.569 },
00:10:27.569 {
00:10:27.569 "name": "BaseBdev2",
00:10:27.569 "uuid": "51d5880f-2fa6-46a3-8a3a-7aa066bdaf70",
00:10:27.569 "is_configured": true,
00:10:27.569 "data_offset": 0,
00:10:27.569 "data_size": 65536
00:10:27.569 },
00:10:27.569 {
00:10:27.569 "name": "BaseBdev3",
00:10:27.569 "uuid": "db6b8b02-9617-4e8c-a018-641281d32bdf",
00:10:27.569 "is_configured": true,
00:10:27.569 "data_offset": 0,
00:10:27.569 "data_size": 65536
00:10:27.569 },
00:10:27.569 {
00:10:27.569 "name": "BaseBdev4",
00:10:27.569 "uuid": "a4c006bf-ac6e-4433-b72c-0fb09b31ac58",
00:10:27.569 "is_configured": true,
00:10:27.569 "data_offset": 0,
00:10:27.569 "data_size": 65536
00:10:27.569 }
00:10:27.569 ]
00:10:27.569 }'
00:10:27.569 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:27.569 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.138 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.139 [2024-10-13 02:24:46.563720] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.139 [2024-10-13 02:24:46.644497] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.139 [2024-10-13 02:24:46.725057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
[2024-10-13 02:24:46.725119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.139 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.399 BaseBdev2
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.399 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.399 [
00:10:28.399 {
00:10:28.399 "name": "BaseBdev2",
00:10:28.399 "aliases": [
00:10:28.399 "07aeb657-fe45-495d-aaf7-2025fab4f2fa"
00:10:28.399 ],
00:10:28.399 "product_name": "Malloc disk",
00:10:28.399 "block_size": 512,
00:10:28.399 "num_blocks": 65536,
00:10:28.399 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa",
00:10:28.400 "assigned_rate_limits": {
00:10:28.400 "rw_ios_per_sec": 0,
00:10:28.400 "rw_mbytes_per_sec": 0,
00:10:28.400 "r_mbytes_per_sec": 0,
00:10:28.400 "w_mbytes_per_sec": 0
00:10:28.400 },
00:10:28.400 "claimed": false,
00:10:28.400 "zoned": false,
00:10:28.400 "supported_io_types": {
00:10:28.400 "read": true,
00:10:28.400 "write": true,
00:10:28.400 "unmap": true,
00:10:28.400 "flush": true,
00:10:28.400 "reset": true,
00:10:28.400 "nvme_admin": false,
00:10:28.400 "nvme_io": false,
00:10:28.400 "nvme_io_md": false,
00:10:28.400 "write_zeroes": true,
00:10:28.400 "zcopy": true,
00:10:28.400 "get_zone_info": false,
00:10:28.400 "zone_management": false,
00:10:28.400 "zone_append": false,
00:10:28.400 "compare": false,
00:10:28.400 "compare_and_write": false,
00:10:28.400 "abort": true,
00:10:28.400 "seek_hole": false,
00:10:28.400 "seek_data": false,
00:10:28.400 "copy": true,
00:10:28.400 "nvme_iov_md": false
00:10:28.400 },
00:10:28.400 "memory_domains": [
00:10:28.400 {
00:10:28.400 "dma_device_id": "system",
00:10:28.400 "dma_device_type": 1
00:10:28.400 },
00:10:28.400 {
00:10:28.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.400 "dma_device_type": 2
00:10:28.400 }
00:10:28.400 ],
00:10:28.400 "driver_specific": {}
00:10:28.400 }
00:10:28.400 ]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.400 BaseBdev3
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.400 [
00:10:28.400 {
00:10:28.400 "name": "BaseBdev3",
00:10:28.400 "aliases": [
00:10:28.400 "8ba02c6e-8504-4a9d-bb0c-683c6325e7da"
00:10:28.400 ],
00:10:28.400 "product_name": "Malloc disk",
00:10:28.400 "block_size": 512,
00:10:28.400 "num_blocks": 65536,
00:10:28.400 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da",
00:10:28.400 "assigned_rate_limits": {
00:10:28.400 "rw_ios_per_sec": 0,
00:10:28.400 "rw_mbytes_per_sec": 0,
00:10:28.400 "r_mbytes_per_sec": 0,
00:10:28.400 "w_mbytes_per_sec": 0
00:10:28.400 },
00:10:28.400 "claimed": false,
00:10:28.400 "zoned": false,
00:10:28.400 "supported_io_types": {
00:10:28.400 "read": true,
00:10:28.400 "write": true,
00:10:28.400 "unmap": true,
00:10:28.400 "flush": true,
00:10:28.400 "reset": true,
00:10:28.400 "nvme_admin": false,
00:10:28.400 "nvme_io": false,
00:10:28.400 "nvme_io_md": false,
00:10:28.400 "write_zeroes": true,
00:10:28.400 "zcopy": true,
00:10:28.400 "get_zone_info": false,
00:10:28.400 "zone_management": false,
00:10:28.400 "zone_append": false,
00:10:28.400 "compare": false,
00:10:28.400 "compare_and_write": false,
00:10:28.400 "abort": true,
00:10:28.400 "seek_hole": false,
00:10:28.400 "seek_data": false,
00:10:28.400 "copy": true,
00:10:28.400 "nvme_iov_md": false
00:10:28.400 },
00:10:28.400 "memory_domains": [
00:10:28.400 {
00:10:28.400 "dma_device_id": "system",
00:10:28.400 "dma_device_type": 1
00:10:28.400 },
00:10:28.400 {
00:10:28.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.400 "dma_device_type": 2
00:10:28.400 }
00:10:28.400 ],
00:10:28.400 "driver_specific": {}
00:10:28.400 }
00:10:28.400 ]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.400 BaseBdev4
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.400 [
00:10:28.400 {
00:10:28.400 "name": "BaseBdev4",
00:10:28.400 "aliases": [
00:10:28.400 "abb4089f-5e7e-4905-b948-56cdba5d069a"
00:10:28.400 ],
00:10:28.400 "product_name": "Malloc disk",
00:10:28.400 "block_size": 512,
00:10:28.400 "num_blocks": 65536,
00:10:28.400 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a",
00:10:28.400 "assigned_rate_limits": {
00:10:28.400 "rw_ios_per_sec": 0,
00:10:28.400 "rw_mbytes_per_sec": 0,
00:10:28.400 "r_mbytes_per_sec": 0,
00:10:28.400 "w_mbytes_per_sec": 0
00:10:28.400 },
00:10:28.400 "claimed": false,
00:10:28.400 "zoned": false,
00:10:28.400 "supported_io_types": {
00:10:28.400 "read": true,
00:10:28.400 "write": true,
00:10:28.400 "unmap": true,
00:10:28.400 "flush": true,
00:10:28.400 "reset": true,
00:10:28.400 "nvme_admin": false,
00:10:28.400 "nvme_io": false,
00:10:28.400 "nvme_io_md": false,
00:10:28.400 "write_zeroes": true,
00:10:28.400 "zcopy": true,
00:10:28.400 "get_zone_info": false,
00:10:28.400 "zone_management": false,
00:10:28.400 "zone_append": false,
00:10:28.400 "compare": false,
00:10:28.400 "compare_and_write": false,
00:10:28.400 "abort": true,
00:10:28.400 "seek_hole": false,
00:10:28.400 "seek_data": false,
00:10:28.400 "copy": true,
00:10:28.400 "nvme_iov_md": false
00:10:28.400 },
00:10:28.400 "memory_domains": [
00:10:28.400 {
00:10:28.400 "dma_device_id": "system",
00:10:28.400 "dma_device_type": 1
00:10:28.400 },
00:10:28.400 {
00:10:28.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:28.400 "dma_device_type": 2
00:10:28.400 }
00:10:28.400 ],
00:10:28.400 "driver_specific": {}
00:10:28.400 }
00:10:28.400 ]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.400 [2024-10-13 02:24:46.987408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-10-13 02:24:46.987461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-10-13 02:24:46.987502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-10-13 02:24:46.989639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-10-13 02:24:46.989692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:28.400 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:28.401 02:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.401 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.401 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.401 "name": "Existed_Raid",
00:10:28.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:28.401 "strip_size_kb": 64,
00:10:28.401 "state": "configuring",
00:10:28.401 "raid_level": "raid0",
00:10:28.401 "superblock": false,
00:10:28.401 "num_base_bdevs": 4,
00:10:28.401 "num_base_bdevs_discovered": 3,
00:10:28.401 "num_base_bdevs_operational": 4,
00:10:28.401 "base_bdevs_list": [
00:10:28.401 {
00:10:28.401 "name": "BaseBdev1",
00:10:28.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:28.401 "is_configured": false,
00:10:28.401 "data_offset": 0,
00:10:28.401 "data_size": 0
00:10:28.401 },
00:10:28.401 {
00:10:28.401 "name": "BaseBdev2",
00:10:28.401 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa",
00:10:28.401 "is_configured": true,
00:10:28.401 "data_offset": 0,
00:10:28.401 "data_size": 65536
00:10:28.401 },
00:10:28.401 {
00:10:28.401 "name": "BaseBdev3",
00:10:28.401 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da",
00:10:28.401 "is_configured": true,
00:10:28.401 "data_offset": 0,
00:10:28.401 "data_size": 65536
00:10:28.401 },
00:10:28.401 {
00:10:28.401 "name": "BaseBdev4",
00:10:28.401 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a",
00:10:28.401 "is_configured": true,
00:10:28.401 "data_offset": 0,
00:10:28.401 "data_size": 65536
00:10:28.401 }
00:10:28.401 ]
00:10:28.401 }'
00:10:28.401 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.401 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.970 [2024-10-13 02:24:47.438715] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.970 "name": "Existed_Raid",
00:10:28.970 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:28.970 "strip_size_kb": 64,
00:10:28.970 "state": "configuring",
00:10:28.970 "raid_level": "raid0",
00:10:28.970 "superblock": false,
00:10:28.970 "num_base_bdevs": 4,
00:10:28.970 "num_base_bdevs_discovered": 2,
00:10:28.970 "num_base_bdevs_operational": 4,
00:10:28.970 "base_bdevs_list": [
00:10:28.970 {
00:10:28.970 "name": "BaseBdev1",
00:10:28.970 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:28.970 "is_configured": false,
00:10:28.970 "data_offset": 0,
00:10:28.970 "data_size": 0
00:10:28.970 },
00:10:28.970 {
00:10:28.970 "name": null,
00:10:28.970 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa",
00:10:28.970 "is_configured": false,
00:10:28.970 "data_offset": 0,
00:10:28.970 "data_size": 65536
00:10:28.970 },
00:10:28.970 {
00:10:28.970 "name": "BaseBdev3",
00:10:28.970 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da",
00:10:28.970 "is_configured": true,
00:10:28.970 "data_offset": 0,
00:10:28.970 "data_size": 65536
00:10:28.970 },
00:10:28.970 {
00:10:28.970 "name": "BaseBdev4",
00:10:28.970 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a",
00:10:28.970 "is_configured": true,
00:10:28.970 "data_offset": 0,
00:10:28.970 "data_size": 65536
00:10:28.970 }
00:10:28.970 ]
00:10:28.970 }'
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.970 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.230 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.230 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:29.230 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.230 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.230 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.230 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:29.230 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:29.230 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.230 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.496 [2024-10-13 02:24:47.930768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:29.496 BaseBdev1
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.496 [
00:10:29.496 {
00:10:29.496 "name": "BaseBdev1",
00:10:29.496 "aliases": [
00:10:29.496 "3cecf669-d56c-424d-8b92-6de607862fcb"
00:10:29.496 ],
00:10:29.496 "product_name": "Malloc disk",
00:10:29.496 "block_size": 512,
00:10:29.496 "num_blocks": 65536,
00:10:29.496 "uuid": "3cecf669-d56c-424d-8b92-6de607862fcb",
00:10:29.496 "assigned_rate_limits": {
00:10:29.496 "rw_ios_per_sec": 0,
00:10:29.496 "rw_mbytes_per_sec": 0,
00:10:29.496 "r_mbytes_per_sec": 0,
00:10:29.496 "w_mbytes_per_sec": 0
00:10:29.496 },
00:10:29.496 "claimed": true,
00:10:29.496 "claim_type": "exclusive_write",
00:10:29.496 "zoned": false,
00:10:29.496 "supported_io_types": {
00:10:29.496 "read": true,
00:10:29.496 "write": true,
00:10:29.496 "unmap": true,
00:10:29.496 "flush": true,
00:10:29.496 "reset": true,
00:10:29.496 "nvme_admin": false,
00:10:29.496 "nvme_io": false,
00:10:29.496 "nvme_io_md": false,
00:10:29.496 "write_zeroes": true,
00:10:29.496 "zcopy": true,
00:10:29.496 "get_zone_info": false,
00:10:29.496 "zone_management": false,
00:10:29.496 "zone_append": false,
00:10:29.496 "compare": false,
00:10:29.496 "compare_and_write": false,
00:10:29.496 "abort": true,
00:10:29.496 "seek_hole": false,
00:10:29.496 "seek_data": false,
00:10:29.496 "copy": true,
00:10:29.496 "nvme_iov_md": false
00:10:29.496 },
00:10:29.496 "memory_domains": [
00:10:29.496 {
00:10:29.496 "dma_device_id": "system",
00:10:29.496 "dma_device_type": 1
00:10:29.496 },
00:10:29.496 {
00:10:29.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:29.496 "dma_device_type": 2
00:10:29.496 }
00:10:29.496 ],
00:10:29.496 "driver_specific": {}
00:10:29.496 }
00:10:29.496 ]
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.496 02:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.496 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:29.496 "name": "Existed_Raid",
00:10:29.496 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:29.496 "strip_size_kb": 64,
00:10:29.496 "state": "configuring",
00:10:29.496 "raid_level": "raid0",
00:10:29.496 "superblock": false,
00:10:29.496 "num_base_bdevs": 4,
00:10:29.496 "num_base_bdevs_discovered": 3,
00:10:29.496 "num_base_bdevs_operational": 4,
00:10:29.496 "base_bdevs_list": [
00:10:29.496 {
00:10:29.496 "name": "BaseBdev1",
00:10:29.496 "uuid": "3cecf669-d56c-424d-8b92-6de607862fcb",
00:10:29.496 "is_configured": true,
00:10:29.496 "data_offset": 0,
00:10:29.496 "data_size": 65536
00:10:29.496 },
00:10:29.497 {
00:10:29.497 "name": null,
00:10:29.497 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa",
00:10:29.497 "is_configured": false,
00:10:29.497 "data_offset": 0,
00:10:29.497 "data_size": 65536
00:10:29.497 },
00:10:29.497 {
00:10:29.497 "name": "BaseBdev3",
00:10:29.497 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da",
00:10:29.497 "is_configured": true,
00:10:29.497 "data_offset": 0,
00:10:29.497 "data_size": 65536
00:10:29.497 },
00:10:29.497 {
00:10:29.497 "name": "BaseBdev4",
00:10:29.497 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a",
00:10:29.497 "is_configured": true,
00:10:29.497 "data_offset": 0,
00:10:29.497 "data_size": 65536
00:10:29.497 }
00:10:29.497 ]
00:10:29.497 }'
00:10:29.497 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:29.497 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.755 [2024-10-13 02:24:48.421989] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:29.755 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.015 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.015 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:30.015 "name": "Existed_Raid",
00:10:30.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:30.015 "strip_size_kb": 64,
00:10:30.015 "state": "configuring",
00:10:30.015 "raid_level": "raid0",
00:10:30.015 "superblock": false,
00:10:30.015 "num_base_bdevs": 4,
00:10:30.015 "num_base_bdevs_discovered": 2,
00:10:30.015 "num_base_bdevs_operational": 4,
00:10:30.015 "base_bdevs_list": [
00:10:30.015 {
00:10:30.015 "name": "BaseBdev1",
00:10:30.015 "uuid": "3cecf669-d56c-424d-8b92-6de607862fcb",
00:10:30.015 "is_configured": true,
00:10:30.015 "data_offset": 0,
00:10:30.015 "data_size": 65536
00:10:30.015 },
00:10:30.015 {
00:10:30.015 "name": null,
00:10:30.015 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa",
00:10:30.015 "is_configured": false,
00:10:30.015 "data_offset": 0,
00:10:30.015 "data_size": 65536
00:10:30.015 },
00:10:30.015 {
00:10:30.015 "name": null,
00:10:30.015 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da",
00:10:30.015 "is_configured": false,
00:10:30.015 "data_offset": 0,
00:10:30.015 "data_size": 65536
00:10:30.015 },
00:10:30.015 {
00:10:30.015 "name": "BaseBdev4",
00:10:30.015 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a",
00:10:30.015 "is_configured": true,
00:10:30.015 "data_offset": 0,
00:10:30.015 "data_size": 65536
00:10:30.015 }
00:10:30.015 ]
00:10:30.015 }'
00:10:30.015 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:30.015 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- #
rpc_cmd bdev_raid_get_bdevs all 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.276 [2024-10-13 02:24:48.885242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.276 "name": "Existed_Raid", 00:10:30.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.276 "strip_size_kb": 64, 00:10:30.276 "state": "configuring", 00:10:30.276 "raid_level": "raid0", 00:10:30.276 "superblock": false, 00:10:30.276 "num_base_bdevs": 4, 00:10:30.276 "num_base_bdevs_discovered": 3, 00:10:30.276 "num_base_bdevs_operational": 4, 00:10:30.276 "base_bdevs_list": [ 00:10:30.276 { 00:10:30.276 "name": "BaseBdev1", 00:10:30.276 "uuid": "3cecf669-d56c-424d-8b92-6de607862fcb", 00:10:30.276 "is_configured": true, 00:10:30.276 "data_offset": 0, 00:10:30.276 "data_size": 65536 00:10:30.276 }, 00:10:30.276 { 00:10:30.276 "name": null, 00:10:30.276 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa", 00:10:30.276 "is_configured": false, 00:10:30.276 "data_offset": 0, 00:10:30.276 "data_size": 65536 00:10:30.276 }, 00:10:30.276 { 00:10:30.276 "name": "BaseBdev3", 00:10:30.276 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da", 00:10:30.276 "is_configured": 
true, 00:10:30.276 "data_offset": 0, 00:10:30.276 "data_size": 65536 00:10:30.276 }, 00:10:30.276 { 00:10:30.276 "name": "BaseBdev4", 00:10:30.276 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a", 00:10:30.276 "is_configured": true, 00:10:30.276 "data_offset": 0, 00:10:30.276 "data_size": 65536 00:10:30.276 } 00:10:30.276 ] 00:10:30.276 }' 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.276 02:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.846 [2024-10-13 02:24:49.392385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.846 "name": "Existed_Raid", 00:10:30.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.846 "strip_size_kb": 64, 00:10:30.846 "state": "configuring", 00:10:30.846 "raid_level": "raid0", 00:10:30.846 "superblock": false, 00:10:30.846 "num_base_bdevs": 4, 00:10:30.846 "num_base_bdevs_discovered": 2, 00:10:30.846 "num_base_bdevs_operational": 4, 00:10:30.846 
"base_bdevs_list": [ 00:10:30.846 { 00:10:30.846 "name": null, 00:10:30.846 "uuid": "3cecf669-d56c-424d-8b92-6de607862fcb", 00:10:30.846 "is_configured": false, 00:10:30.846 "data_offset": 0, 00:10:30.846 "data_size": 65536 00:10:30.846 }, 00:10:30.846 { 00:10:30.846 "name": null, 00:10:30.846 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa", 00:10:30.846 "is_configured": false, 00:10:30.846 "data_offset": 0, 00:10:30.846 "data_size": 65536 00:10:30.846 }, 00:10:30.846 { 00:10:30.846 "name": "BaseBdev3", 00:10:30.846 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da", 00:10:30.846 "is_configured": true, 00:10:30.846 "data_offset": 0, 00:10:30.846 "data_size": 65536 00:10:30.846 }, 00:10:30.846 { 00:10:30.846 "name": "BaseBdev4", 00:10:30.846 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a", 00:10:30.846 "is_configured": true, 00:10:30.846 "data_offset": 0, 00:10:30.846 "data_size": 65536 00:10:30.846 } 00:10:30.846 ] 00:10:30.846 }' 00:10:30.846 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.847 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:31.416 02:24:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.416 [2024-10-13 02:24:49.867726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.416 "name": "Existed_Raid", 00:10:31.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.416 "strip_size_kb": 64, 00:10:31.416 "state": "configuring", 00:10:31.416 "raid_level": "raid0", 00:10:31.416 "superblock": false, 00:10:31.416 "num_base_bdevs": 4, 00:10:31.416 "num_base_bdevs_discovered": 3, 00:10:31.416 "num_base_bdevs_operational": 4, 00:10:31.416 "base_bdevs_list": [ 00:10:31.416 { 00:10:31.416 "name": null, 00:10:31.416 "uuid": "3cecf669-d56c-424d-8b92-6de607862fcb", 00:10:31.416 "is_configured": false, 00:10:31.416 "data_offset": 0, 00:10:31.416 "data_size": 65536 00:10:31.416 }, 00:10:31.416 { 00:10:31.416 "name": "BaseBdev2", 00:10:31.416 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa", 00:10:31.416 "is_configured": true, 00:10:31.416 "data_offset": 0, 00:10:31.416 "data_size": 65536 00:10:31.416 }, 00:10:31.416 { 00:10:31.416 "name": "BaseBdev3", 00:10:31.416 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da", 00:10:31.416 "is_configured": true, 00:10:31.416 "data_offset": 0, 00:10:31.416 "data_size": 65536 00:10:31.416 }, 00:10:31.416 { 00:10:31.416 "name": "BaseBdev4", 00:10:31.416 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a", 00:10:31.416 "is_configured": true, 00:10:31.416 "data_offset": 0, 00:10:31.416 "data_size": 65536 00:10:31.416 } 00:10:31.416 ] 00:10:31.416 }' 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.416 02:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:31.676 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3cecf669-d56c-424d-8b92-6de607862fcb 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.936 [2024-10-13 02:24:50.387812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:31.936 [2024-10-13 02:24:50.387884] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:31.936 [2024-10-13 02:24:50.387894] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:31.936 [2024-10-13 02:24:50.388198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:31.936 [2024-10-13 02:24:50.388332] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:31.936 [2024-10-13 02:24:50.388348] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:31.936 [2024-10-13 02:24:50.388538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.936 NewBaseBdev 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.936 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.936 [ 00:10:31.936 { 
00:10:31.936 "name": "NewBaseBdev", 00:10:31.936 "aliases": [ 00:10:31.936 "3cecf669-d56c-424d-8b92-6de607862fcb" 00:10:31.936 ], 00:10:31.936 "product_name": "Malloc disk", 00:10:31.936 "block_size": 512, 00:10:31.936 "num_blocks": 65536, 00:10:31.936 "uuid": "3cecf669-d56c-424d-8b92-6de607862fcb", 00:10:31.936 "assigned_rate_limits": { 00:10:31.936 "rw_ios_per_sec": 0, 00:10:31.936 "rw_mbytes_per_sec": 0, 00:10:31.936 "r_mbytes_per_sec": 0, 00:10:31.936 "w_mbytes_per_sec": 0 00:10:31.936 }, 00:10:31.936 "claimed": true, 00:10:31.936 "claim_type": "exclusive_write", 00:10:31.936 "zoned": false, 00:10:31.937 "supported_io_types": { 00:10:31.937 "read": true, 00:10:31.937 "write": true, 00:10:31.937 "unmap": true, 00:10:31.937 "flush": true, 00:10:31.937 "reset": true, 00:10:31.937 "nvme_admin": false, 00:10:31.937 "nvme_io": false, 00:10:31.937 "nvme_io_md": false, 00:10:31.937 "write_zeroes": true, 00:10:31.937 "zcopy": true, 00:10:31.937 "get_zone_info": false, 00:10:31.937 "zone_management": false, 00:10:31.937 "zone_append": false, 00:10:31.937 "compare": false, 00:10:31.937 "compare_and_write": false, 00:10:31.937 "abort": true, 00:10:31.937 "seek_hole": false, 00:10:31.937 "seek_data": false, 00:10:31.937 "copy": true, 00:10:31.937 "nvme_iov_md": false 00:10:31.937 }, 00:10:31.937 "memory_domains": [ 00:10:31.937 { 00:10:31.937 "dma_device_id": "system", 00:10:31.937 "dma_device_type": 1 00:10:31.937 }, 00:10:31.937 { 00:10:31.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.937 "dma_device_type": 2 00:10:31.937 } 00:10:31.937 ], 00:10:31.937 "driver_specific": {} 00:10:31.937 } 00:10:31.937 ] 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:31.937 
02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.937 "name": "Existed_Raid", 00:10:31.937 "uuid": "791e5565-4503-4c06-9910-1f90dc8b7dce", 00:10:31.937 "strip_size_kb": 64, 00:10:31.937 "state": "online", 00:10:31.937 "raid_level": "raid0", 00:10:31.937 "superblock": false, 00:10:31.937 "num_base_bdevs": 4, 00:10:31.937 "num_base_bdevs_discovered": 4, 00:10:31.937 
"num_base_bdevs_operational": 4, 00:10:31.937 "base_bdevs_list": [ 00:10:31.937 { 00:10:31.937 "name": "NewBaseBdev", 00:10:31.937 "uuid": "3cecf669-d56c-424d-8b92-6de607862fcb", 00:10:31.937 "is_configured": true, 00:10:31.937 "data_offset": 0, 00:10:31.937 "data_size": 65536 00:10:31.937 }, 00:10:31.937 { 00:10:31.937 "name": "BaseBdev2", 00:10:31.937 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa", 00:10:31.937 "is_configured": true, 00:10:31.937 "data_offset": 0, 00:10:31.937 "data_size": 65536 00:10:31.937 }, 00:10:31.937 { 00:10:31.937 "name": "BaseBdev3", 00:10:31.937 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da", 00:10:31.937 "is_configured": true, 00:10:31.937 "data_offset": 0, 00:10:31.937 "data_size": 65536 00:10:31.937 }, 00:10:31.937 { 00:10:31.937 "name": "BaseBdev4", 00:10:31.937 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a", 00:10:31.937 "is_configured": true, 00:10:31.937 "data_offset": 0, 00:10:31.937 "data_size": 65536 00:10:31.937 } 00:10:31.937 ] 00:10:31.937 }' 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.937 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.506 
02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.506 [2024-10-13 02:24:50.899421] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.506 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.506 "name": "Existed_Raid", 00:10:32.506 "aliases": [ 00:10:32.506 "791e5565-4503-4c06-9910-1f90dc8b7dce" 00:10:32.506 ], 00:10:32.506 "product_name": "Raid Volume", 00:10:32.506 "block_size": 512, 00:10:32.506 "num_blocks": 262144, 00:10:32.506 "uuid": "791e5565-4503-4c06-9910-1f90dc8b7dce", 00:10:32.506 "assigned_rate_limits": { 00:10:32.506 "rw_ios_per_sec": 0, 00:10:32.506 "rw_mbytes_per_sec": 0, 00:10:32.506 "r_mbytes_per_sec": 0, 00:10:32.506 "w_mbytes_per_sec": 0 00:10:32.506 }, 00:10:32.506 "claimed": false, 00:10:32.506 "zoned": false, 00:10:32.506 "supported_io_types": { 00:10:32.506 "read": true, 00:10:32.506 "write": true, 00:10:32.506 "unmap": true, 00:10:32.506 "flush": true, 00:10:32.506 "reset": true, 00:10:32.506 "nvme_admin": false, 00:10:32.506 "nvme_io": false, 00:10:32.506 "nvme_io_md": false, 00:10:32.506 "write_zeroes": true, 00:10:32.506 "zcopy": false, 00:10:32.506 "get_zone_info": false, 00:10:32.506 "zone_management": false, 00:10:32.506 "zone_append": false, 00:10:32.506 "compare": false, 00:10:32.506 "compare_and_write": false, 00:10:32.506 "abort": false, 00:10:32.506 "seek_hole": false, 00:10:32.506 "seek_data": false, 00:10:32.506 "copy": false, 00:10:32.506 "nvme_iov_md": false 00:10:32.506 }, 00:10:32.506 "memory_domains": [ 00:10:32.506 { 00:10:32.506 "dma_device_id": 
"system", 00:10:32.506 "dma_device_type": 1 00:10:32.506 }, 00:10:32.506 { 00:10:32.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.506 "dma_device_type": 2 00:10:32.506 }, 00:10:32.506 { 00:10:32.506 "dma_device_id": "system", 00:10:32.506 "dma_device_type": 1 00:10:32.506 }, 00:10:32.506 { 00:10:32.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.506 "dma_device_type": 2 00:10:32.506 }, 00:10:32.506 { 00:10:32.506 "dma_device_id": "system", 00:10:32.506 "dma_device_type": 1 00:10:32.506 }, 00:10:32.506 { 00:10:32.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.506 "dma_device_type": 2 00:10:32.506 }, 00:10:32.506 { 00:10:32.506 "dma_device_id": "system", 00:10:32.506 "dma_device_type": 1 00:10:32.506 }, 00:10:32.506 { 00:10:32.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.506 "dma_device_type": 2 00:10:32.506 } 00:10:32.506 ], 00:10:32.506 "driver_specific": { 00:10:32.506 "raid": { 00:10:32.506 "uuid": "791e5565-4503-4c06-9910-1f90dc8b7dce", 00:10:32.506 "strip_size_kb": 64, 00:10:32.506 "state": "online", 00:10:32.506 "raid_level": "raid0", 00:10:32.506 "superblock": false, 00:10:32.506 "num_base_bdevs": 4, 00:10:32.506 "num_base_bdevs_discovered": 4, 00:10:32.506 "num_base_bdevs_operational": 4, 00:10:32.506 "base_bdevs_list": [ 00:10:32.506 { 00:10:32.506 "name": "NewBaseBdev", 00:10:32.506 "uuid": "3cecf669-d56c-424d-8b92-6de607862fcb", 00:10:32.507 "is_configured": true, 00:10:32.507 "data_offset": 0, 00:10:32.507 "data_size": 65536 00:10:32.507 }, 00:10:32.507 { 00:10:32.507 "name": "BaseBdev2", 00:10:32.507 "uuid": "07aeb657-fe45-495d-aaf7-2025fab4f2fa", 00:10:32.507 "is_configured": true, 00:10:32.507 "data_offset": 0, 00:10:32.507 "data_size": 65536 00:10:32.507 }, 00:10:32.507 { 00:10:32.507 "name": "BaseBdev3", 00:10:32.507 "uuid": "8ba02c6e-8504-4a9d-bb0c-683c6325e7da", 00:10:32.507 "is_configured": true, 00:10:32.507 "data_offset": 0, 00:10:32.507 "data_size": 65536 00:10:32.507 }, 00:10:32.507 { 00:10:32.507 "name": 
"BaseBdev4", 00:10:32.507 "uuid": "abb4089f-5e7e-4905-b948-56cdba5d069a", 00:10:32.507 "is_configured": true, 00:10:32.507 "data_offset": 0, 00:10:32.507 "data_size": 65536 00:10:32.507 } 00:10:32.507 ] 00:10:32.507 } 00:10:32.507 } 00:10:32.507 }' 00:10:32.507 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.507 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:32.507 BaseBdev2 00:10:32.507 BaseBdev3 00:10:32.507 BaseBdev4' 00:10:32.507 02:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:32.507 02:24:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.507 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.768 [2024-10-13 02:24:51.235050] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.768 [2024-10-13 02:24:51.235088] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.768 [2024-10-13 02:24:51.235171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.768 [2024-10-13 02:24:51.235249] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.768 [2024-10-13 02:24:51.235260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80244 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 80244 ']' 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80244 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80244 00:10:32.768 killing process with pid 80244 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80244' 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80244 00:10:32.768 [2024-10-13 02:24:51.273691] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.768 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80244 00:10:32.768 [2024-10-13 02:24:51.353815] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:33.339 00:10:33.339 real 0m9.801s 00:10:33.339 user 0m16.359s 00:10:33.339 sys 0m2.165s 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.339 ************************************ 00:10:33.339 END TEST raid_state_function_test 00:10:33.339 ************************************ 00:10:33.339 02:24:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
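The `run_test raid_state_function_test_sb …` invocation above, together with the `'[' 5 -le 1 ']'` argument-count guard and the `START TEST` / `END TEST` banners, follows a wrapper pattern from `autotest_common.sh`. A minimal sketch of that pattern (simplified and hypothetical — the real helper also manages xtrace state and timing accounting) could look like:

```shell
#!/usr/bin/env bash
# Sketch of the run_test wrapper pattern: refuse to run without a command,
# print the START/END banners, and propagate the wrapped command's status.
run_test() {
	local test_name=$1
	shift
	# Mirrors the "'[' 5 -le 1 ']'" guard in the log: bail out unless a
	# command (plus optional args) was actually supplied.
	[ "$#" -ge 1 ] || return 1
	echo "START TEST $test_name"
	"$@"
	local rc=$?
	echo "END TEST $test_name"
	return $rc
}

run_test demo_test true
```

Here `demo_test` and `true` are placeholder arguments; in the log the wrapped command is `raid_state_function_test raid0 4 true`.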
00:10:33.339 02:24:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:33.339 02:24:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.339 02:24:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.339 ************************************ 00:10:33.339 START TEST raid_state_function_test_sb 00:10:33.339 ************************************ 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:33.339 02:24:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80904 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80904' 00:10:33.339 Process raid pid: 80904 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80904 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80904 ']' 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.339 02:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.339 [2024-10-13 02:24:51.904318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
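The `waitforlisten 80904` step above blocks until the freshly started `bdev_svc` app is listening on `/var/tmp/spdk.sock`. A rough stand-alone sketch of that polling idea (hypothetical helper name and socket path; the real helper in `autotest_common.sh` also verifies the pid is still alive and uses the RPC client itself):

```shell
#!/usr/bin/env bash
# Sketch: poll for an app's UNIX-domain RPC socket, giving up after
# max_retries attempts (roughly what waitforlisten does for spdk.sock).
waitforlisten_sketch() {
	local sock=$1
	local max_retries=${2:-100}
	local i=0
	while [ ! -S "$sock" ]; do
		i=$((i + 1))
		# Return failure once the retry budget is exhausted.
		[ "$i" -ge "$max_retries" ] && return 1
		sleep 0.1
	done
	return 0
}
```

Usage in the log corresponds to something like `waitforlisten_sketch /var/tmp/spdk.sock 100` run right after launching `bdev_svc`.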
00:10:33.339 [2024-10-13 02:24:51.904478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.599 [2024-10-13 02:24:52.050570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.599 [2024-10-13 02:24:52.127513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.599 [2024-10-13 02:24:52.208747] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.599 [2024-10-13 02:24:52.208801] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.169 [2024-10-13 02:24:52.735636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.169 [2024-10-13 02:24:52.735699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.169 [2024-10-13 02:24:52.735714] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.169 [2024-10-13 02:24:52.735726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.169 [2024-10-13 02:24:52.735736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:34.169 [2024-10-13 02:24:52.735749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.169 [2024-10-13 02:24:52.735756] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.169 [2024-10-13 02:24:52.735766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.169 02:24:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.169 "name": "Existed_Raid", 00:10:34.169 "uuid": "84c69503-5f99-4b7d-9ec3-2c97d1a8c64c", 00:10:34.169 "strip_size_kb": 64, 00:10:34.169 "state": "configuring", 00:10:34.169 "raid_level": "raid0", 00:10:34.169 "superblock": true, 00:10:34.169 "num_base_bdevs": 4, 00:10:34.169 "num_base_bdevs_discovered": 0, 00:10:34.169 "num_base_bdevs_operational": 4, 00:10:34.169 "base_bdevs_list": [ 00:10:34.169 { 00:10:34.169 "name": "BaseBdev1", 00:10:34.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.169 "is_configured": false, 00:10:34.169 "data_offset": 0, 00:10:34.169 "data_size": 0 00:10:34.169 }, 00:10:34.169 { 00:10:34.169 "name": "BaseBdev2", 00:10:34.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.169 "is_configured": false, 00:10:34.169 "data_offset": 0, 00:10:34.169 "data_size": 0 00:10:34.169 }, 00:10:34.169 { 00:10:34.169 "name": "BaseBdev3", 00:10:34.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.169 "is_configured": false, 00:10:34.169 "data_offset": 0, 00:10:34.169 "data_size": 0 00:10:34.169 }, 00:10:34.169 { 00:10:34.169 "name": "BaseBdev4", 00:10:34.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.169 "is_configured": false, 00:10:34.169 "data_offset": 0, 00:10:34.169 "data_size": 0 00:10:34.169 } 00:10:34.169 ] 00:10:34.169 }' 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.169 02:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.738 [2024-10-13 02:24:53.150685] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.738 [2024-10-13 02:24:53.150745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.738 [2024-10-13 02:24:53.162669] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.738 [2024-10-13 02:24:53.162716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.738 [2024-10-13 02:24:53.162724] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.738 [2024-10-13 02:24:53.162751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.738 [2024-10-13 02:24:53.162757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.738 [2024-10-13 02:24:53.162767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.738 [2024-10-13 02:24:53.162773] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:34.738 [2024-10-13 02:24:53.162783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.738 [2024-10-13 02:24:53.190664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.738 BaseBdev1 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.738 [ 00:10:34.738 { 00:10:34.738 "name": "BaseBdev1", 00:10:34.738 "aliases": [ 00:10:34.738 "2fe32924-93b7-4fe8-b4df-8e2573cf303f" 00:10:34.738 ], 00:10:34.738 "product_name": "Malloc disk", 00:10:34.738 "block_size": 512, 00:10:34.738 "num_blocks": 65536, 00:10:34.738 "uuid": "2fe32924-93b7-4fe8-b4df-8e2573cf303f", 00:10:34.738 "assigned_rate_limits": { 00:10:34.738 "rw_ios_per_sec": 0, 00:10:34.738 "rw_mbytes_per_sec": 0, 00:10:34.738 "r_mbytes_per_sec": 0, 00:10:34.738 "w_mbytes_per_sec": 0 00:10:34.738 }, 00:10:34.738 "claimed": true, 00:10:34.738 "claim_type": "exclusive_write", 00:10:34.738 "zoned": false, 00:10:34.738 "supported_io_types": { 00:10:34.738 "read": true, 00:10:34.738 "write": true, 00:10:34.738 "unmap": true, 00:10:34.738 "flush": true, 00:10:34.738 "reset": true, 00:10:34.738 "nvme_admin": false, 00:10:34.738 "nvme_io": false, 00:10:34.738 "nvme_io_md": false, 00:10:34.738 "write_zeroes": true, 00:10:34.738 "zcopy": true, 00:10:34.738 "get_zone_info": false, 00:10:34.738 "zone_management": false, 00:10:34.738 "zone_append": false, 00:10:34.738 "compare": false, 00:10:34.738 "compare_and_write": false, 00:10:34.738 "abort": true, 00:10:34.738 "seek_hole": false, 00:10:34.738 "seek_data": false, 00:10:34.738 "copy": true, 00:10:34.738 "nvme_iov_md": false 00:10:34.738 }, 00:10:34.738 "memory_domains": [ 00:10:34.738 { 00:10:34.738 "dma_device_id": "system", 00:10:34.738 "dma_device_type": 1 00:10:34.738 }, 00:10:34.738 { 00:10:34.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.738 "dma_device_type": 2 00:10:34.738 } 00:10:34.738 ], 00:10:34.738 "driver_specific": {} 
00:10:34.738 } 00:10:34.738 ] 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.738 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.739 "name": "Existed_Raid", 00:10:34.739 "uuid": "3f8c8474-2aa8-4991-8492-37f5ed6aaf7f", 00:10:34.739 "strip_size_kb": 64, 00:10:34.739 "state": "configuring", 00:10:34.739 "raid_level": "raid0", 00:10:34.739 "superblock": true, 00:10:34.739 "num_base_bdevs": 4, 00:10:34.739 "num_base_bdevs_discovered": 1, 00:10:34.739 "num_base_bdevs_operational": 4, 00:10:34.739 "base_bdevs_list": [ 00:10:34.739 { 00:10:34.739 "name": "BaseBdev1", 00:10:34.739 "uuid": "2fe32924-93b7-4fe8-b4df-8e2573cf303f", 00:10:34.739 "is_configured": true, 00:10:34.739 "data_offset": 2048, 00:10:34.739 "data_size": 63488 00:10:34.739 }, 00:10:34.739 { 00:10:34.739 "name": "BaseBdev2", 00:10:34.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.739 "is_configured": false, 00:10:34.739 "data_offset": 0, 00:10:34.739 "data_size": 0 00:10:34.739 }, 00:10:34.739 { 00:10:34.739 "name": "BaseBdev3", 00:10:34.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.739 "is_configured": false, 00:10:34.739 "data_offset": 0, 00:10:34.739 "data_size": 0 00:10:34.739 }, 00:10:34.739 { 00:10:34.739 "name": "BaseBdev4", 00:10:34.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.739 "is_configured": false, 00:10:34.739 "data_offset": 0, 00:10:34.739 "data_size": 0 00:10:34.739 } 00:10:34.739 ] 00:10:34.739 }' 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.739 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.998 [2024-10-13 02:24:53.645991] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.998 [2024-10-13 02:24:53.646063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.998 [2024-10-13 02:24:53.658027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.998 [2024-10-13 02:24:53.660331] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.998 [2024-10-13 02:24:53.660375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.998 [2024-10-13 02:24:53.660384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.998 [2024-10-13 02:24:53.660393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.998 [2024-10-13 02:24:53.660415] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.998 [2024-10-13 02:24:53.660423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:34.998 02:24:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.998 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.999 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.258 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.258 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.258 "name": 
"Existed_Raid", 00:10:35.258 "uuid": "9bb118cb-8964-4b73-8d25-91b103b4df55", 00:10:35.258 "strip_size_kb": 64, 00:10:35.258 "state": "configuring", 00:10:35.258 "raid_level": "raid0", 00:10:35.258 "superblock": true, 00:10:35.259 "num_base_bdevs": 4, 00:10:35.259 "num_base_bdevs_discovered": 1, 00:10:35.259 "num_base_bdevs_operational": 4, 00:10:35.259 "base_bdevs_list": [ 00:10:35.259 { 00:10:35.259 "name": "BaseBdev1", 00:10:35.259 "uuid": "2fe32924-93b7-4fe8-b4df-8e2573cf303f", 00:10:35.259 "is_configured": true, 00:10:35.259 "data_offset": 2048, 00:10:35.259 "data_size": 63488 00:10:35.259 }, 00:10:35.259 { 00:10:35.259 "name": "BaseBdev2", 00:10:35.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.259 "is_configured": false, 00:10:35.259 "data_offset": 0, 00:10:35.259 "data_size": 0 00:10:35.259 }, 00:10:35.259 { 00:10:35.259 "name": "BaseBdev3", 00:10:35.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.259 "is_configured": false, 00:10:35.259 "data_offset": 0, 00:10:35.259 "data_size": 0 00:10:35.259 }, 00:10:35.259 { 00:10:35.259 "name": "BaseBdev4", 00:10:35.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.259 "is_configured": false, 00:10:35.259 "data_offset": 0, 00:10:35.259 "data_size": 0 00:10:35.259 } 00:10:35.259 ] 00:10:35.259 }' 00:10:35.259 02:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.259 02:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.519 [2024-10-13 02:24:54.085303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
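The `waitforbdev BaseBdev2` sequence above (with its 2000 ms default timeout) retries `rpc_cmd bdev_get_bdevs -b <name>` until the malloc bdev appears. A simplified, self-contained sketch of that retry loop — here `rpc_cmd` is a stub, whereas the real test talks to the SPDK RPC socket:

```shell
#!/usr/bin/env bash
# Stand-in for the SPDK RPC client; the real rpc_cmd sends JSON-RPC over
# /var/tmp/spdk.sock and prints the bdev description on success.
rpc_cmd() { echo "BaseBdev2"; }

# Sketch of the waitforbdev pattern: poll bdev_get_bdevs until the named
# bdev is reported or the timeout (default 2000 ms) elapses.
waitforbdev_sketch() {
	local bdev_name=$1
	local timeout_ms=${2:-2000}
	local waited=0
	while ! rpc_cmd bdev_get_bdevs -b "$bdev_name" | grep -q "$bdev_name"; do
		sleep 0.1
		waited=$((waited + 100))
		[ "$waited" -ge "$timeout_ms" ] && return 1
	done
	return 0
}

waitforbdev_sketch BaseBdev2
```

With the stub in place the loop succeeds on the first probe; in the log the loop exits once `bdev_malloc_create 32 512 -b BaseBdev2` has registered the bdev.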
00:10:35.519 BaseBdev2 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.519 [ 00:10:35.519 { 00:10:35.519 "name": "BaseBdev2", 00:10:35.519 "aliases": [ 00:10:35.519 "a0c47056-4479-453e-bb79-51050a4885a8" 00:10:35.519 ], 00:10:35.519 "product_name": "Malloc disk", 00:10:35.519 "block_size": 512, 00:10:35.519 "num_blocks": 65536, 00:10:35.519 "uuid": "a0c47056-4479-453e-bb79-51050a4885a8", 00:10:35.519 
"assigned_rate_limits": { 00:10:35.519 "rw_ios_per_sec": 0, 00:10:35.519 "rw_mbytes_per_sec": 0, 00:10:35.519 "r_mbytes_per_sec": 0, 00:10:35.519 "w_mbytes_per_sec": 0 00:10:35.519 }, 00:10:35.519 "claimed": true, 00:10:35.519 "claim_type": "exclusive_write", 00:10:35.519 "zoned": false, 00:10:35.519 "supported_io_types": { 00:10:35.519 "read": true, 00:10:35.519 "write": true, 00:10:35.519 "unmap": true, 00:10:35.519 "flush": true, 00:10:35.519 "reset": true, 00:10:35.519 "nvme_admin": false, 00:10:35.519 "nvme_io": false, 00:10:35.519 "nvme_io_md": false, 00:10:35.519 "write_zeroes": true, 00:10:35.519 "zcopy": true, 00:10:35.519 "get_zone_info": false, 00:10:35.519 "zone_management": false, 00:10:35.519 "zone_append": false, 00:10:35.519 "compare": false, 00:10:35.519 "compare_and_write": false, 00:10:35.519 "abort": true, 00:10:35.519 "seek_hole": false, 00:10:35.519 "seek_data": false, 00:10:35.519 "copy": true, 00:10:35.519 "nvme_iov_md": false 00:10:35.519 }, 00:10:35.519 "memory_domains": [ 00:10:35.519 { 00:10:35.519 "dma_device_id": "system", 00:10:35.519 "dma_device_type": 1 00:10:35.519 }, 00:10:35.519 { 00:10:35.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.519 "dma_device_type": 2 00:10:35.519 } 00:10:35.519 ], 00:10:35.519 "driver_specific": {} 00:10:35.519 } 00:10:35.519 ] 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.519 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.519 "name": "Existed_Raid", 00:10:35.519 "uuid": "9bb118cb-8964-4b73-8d25-91b103b4df55", 00:10:35.519 "strip_size_kb": 64, 00:10:35.519 "state": "configuring", 00:10:35.519 "raid_level": "raid0", 00:10:35.519 "superblock": true, 00:10:35.519 "num_base_bdevs": 4, 00:10:35.519 "num_base_bdevs_discovered": 2, 00:10:35.519 "num_base_bdevs_operational": 4, 
00:10:35.519 "base_bdevs_list": [ 00:10:35.519 { 00:10:35.519 "name": "BaseBdev1", 00:10:35.519 "uuid": "2fe32924-93b7-4fe8-b4df-8e2573cf303f", 00:10:35.519 "is_configured": true, 00:10:35.519 "data_offset": 2048, 00:10:35.519 "data_size": 63488 00:10:35.519 }, 00:10:35.519 { 00:10:35.519 "name": "BaseBdev2", 00:10:35.519 "uuid": "a0c47056-4479-453e-bb79-51050a4885a8", 00:10:35.519 "is_configured": true, 00:10:35.519 "data_offset": 2048, 00:10:35.519 "data_size": 63488 00:10:35.520 }, 00:10:35.520 { 00:10:35.520 "name": "BaseBdev3", 00:10:35.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.520 "is_configured": false, 00:10:35.520 "data_offset": 0, 00:10:35.520 "data_size": 0 00:10:35.520 }, 00:10:35.520 { 00:10:35.520 "name": "BaseBdev4", 00:10:35.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.520 "is_configured": false, 00:10:35.520 "data_offset": 0, 00:10:35.520 "data_size": 0 00:10:35.520 } 00:10:35.520 ] 00:10:35.520 }' 00:10:35.520 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.520 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.090 [2024-10-13 02:24:54.582648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.090 BaseBdev3 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.090 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.090 [ 00:10:36.090 { 00:10:36.090 "name": "BaseBdev3", 00:10:36.090 "aliases": [ 00:10:36.090 "96bd832f-774d-4809-b471-472720c33a81" 00:10:36.090 ], 00:10:36.090 "product_name": "Malloc disk", 00:10:36.090 "block_size": 512, 00:10:36.090 "num_blocks": 65536, 00:10:36.090 "uuid": "96bd832f-774d-4809-b471-472720c33a81", 00:10:36.090 "assigned_rate_limits": { 00:10:36.090 "rw_ios_per_sec": 0, 00:10:36.090 "rw_mbytes_per_sec": 0, 00:10:36.090 "r_mbytes_per_sec": 0, 00:10:36.090 "w_mbytes_per_sec": 0 00:10:36.090 }, 00:10:36.090 "claimed": true, 00:10:36.090 "claim_type": "exclusive_write", 00:10:36.090 "zoned": false, 00:10:36.090 "supported_io_types": { 00:10:36.090 "read": true, 00:10:36.090 
"write": true, 00:10:36.090 "unmap": true, 00:10:36.090 "flush": true, 00:10:36.090 "reset": true, 00:10:36.090 "nvme_admin": false, 00:10:36.090 "nvme_io": false, 00:10:36.090 "nvme_io_md": false, 00:10:36.090 "write_zeroes": true, 00:10:36.090 "zcopy": true, 00:10:36.090 "get_zone_info": false, 00:10:36.090 "zone_management": false, 00:10:36.090 "zone_append": false, 00:10:36.090 "compare": false, 00:10:36.090 "compare_and_write": false, 00:10:36.090 "abort": true, 00:10:36.090 "seek_hole": false, 00:10:36.090 "seek_data": false, 00:10:36.090 "copy": true, 00:10:36.090 "nvme_iov_md": false 00:10:36.090 }, 00:10:36.090 "memory_domains": [ 00:10:36.090 { 00:10:36.090 "dma_device_id": "system", 00:10:36.090 "dma_device_type": 1 00:10:36.090 }, 00:10:36.090 { 00:10:36.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.091 "dma_device_type": 2 00:10:36.091 } 00:10:36.091 ], 00:10:36.091 "driver_specific": {} 00:10:36.091 } 00:10:36.091 ] 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.091 "name": "Existed_Raid", 00:10:36.091 "uuid": "9bb118cb-8964-4b73-8d25-91b103b4df55", 00:10:36.091 "strip_size_kb": 64, 00:10:36.091 "state": "configuring", 00:10:36.091 "raid_level": "raid0", 00:10:36.091 "superblock": true, 00:10:36.091 "num_base_bdevs": 4, 00:10:36.091 "num_base_bdevs_discovered": 3, 00:10:36.091 "num_base_bdevs_operational": 4, 00:10:36.091 "base_bdevs_list": [ 00:10:36.091 { 00:10:36.091 "name": "BaseBdev1", 00:10:36.091 "uuid": "2fe32924-93b7-4fe8-b4df-8e2573cf303f", 00:10:36.091 "is_configured": true, 00:10:36.091 "data_offset": 2048, 00:10:36.091 "data_size": 63488 00:10:36.091 }, 00:10:36.091 { 00:10:36.091 "name": "BaseBdev2", 00:10:36.091 "uuid": 
"a0c47056-4479-453e-bb79-51050a4885a8", 00:10:36.091 "is_configured": true, 00:10:36.091 "data_offset": 2048, 00:10:36.091 "data_size": 63488 00:10:36.091 }, 00:10:36.091 { 00:10:36.091 "name": "BaseBdev3", 00:10:36.091 "uuid": "96bd832f-774d-4809-b471-472720c33a81", 00:10:36.091 "is_configured": true, 00:10:36.091 "data_offset": 2048, 00:10:36.091 "data_size": 63488 00:10:36.091 }, 00:10:36.091 { 00:10:36.091 "name": "BaseBdev4", 00:10:36.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.091 "is_configured": false, 00:10:36.091 "data_offset": 0, 00:10:36.091 "data_size": 0 00:10:36.091 } 00:10:36.091 ] 00:10:36.091 }' 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.091 02:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.661 [2024-10-13 02:24:55.087730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.661 [2024-10-13 02:24:55.088009] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:36.661 [2024-10-13 02:24:55.088026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:36.661 [2024-10-13 02:24:55.088370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:36.661 BaseBdev4 00:10:36.661 [2024-10-13 02:24:55.088524] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:36.661 [2024-10-13 02:24:55.088556] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:36.661 [2024-10-13 02:24:55.088695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.661 [ 00:10:36.661 { 00:10:36.661 "name": "BaseBdev4", 00:10:36.661 "aliases": [ 00:10:36.661 "bef3ee7d-1226-401a-b2af-15a4f3bcbb1a" 00:10:36.661 ], 00:10:36.661 "product_name": "Malloc disk", 00:10:36.661 "block_size": 512, 00:10:36.661 
"num_blocks": 65536, 00:10:36.661 "uuid": "bef3ee7d-1226-401a-b2af-15a4f3bcbb1a", 00:10:36.661 "assigned_rate_limits": { 00:10:36.661 "rw_ios_per_sec": 0, 00:10:36.661 "rw_mbytes_per_sec": 0, 00:10:36.661 "r_mbytes_per_sec": 0, 00:10:36.661 "w_mbytes_per_sec": 0 00:10:36.661 }, 00:10:36.661 "claimed": true, 00:10:36.661 "claim_type": "exclusive_write", 00:10:36.661 "zoned": false, 00:10:36.661 "supported_io_types": { 00:10:36.661 "read": true, 00:10:36.661 "write": true, 00:10:36.661 "unmap": true, 00:10:36.661 "flush": true, 00:10:36.661 "reset": true, 00:10:36.661 "nvme_admin": false, 00:10:36.661 "nvme_io": false, 00:10:36.661 "nvme_io_md": false, 00:10:36.661 "write_zeroes": true, 00:10:36.661 "zcopy": true, 00:10:36.661 "get_zone_info": false, 00:10:36.661 "zone_management": false, 00:10:36.661 "zone_append": false, 00:10:36.661 "compare": false, 00:10:36.661 "compare_and_write": false, 00:10:36.661 "abort": true, 00:10:36.661 "seek_hole": false, 00:10:36.661 "seek_data": false, 00:10:36.661 "copy": true, 00:10:36.661 "nvme_iov_md": false 00:10:36.661 }, 00:10:36.661 "memory_domains": [ 00:10:36.661 { 00:10:36.661 "dma_device_id": "system", 00:10:36.661 "dma_device_type": 1 00:10:36.661 }, 00:10:36.661 { 00:10:36.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.661 "dma_device_type": 2 00:10:36.661 } 00:10:36.661 ], 00:10:36.661 "driver_specific": {} 00:10:36.661 } 00:10:36.661 ] 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.661 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.661 "name": "Existed_Raid", 00:10:36.661 "uuid": "9bb118cb-8964-4b73-8d25-91b103b4df55", 00:10:36.661 "strip_size_kb": 64, 00:10:36.661 "state": "online", 00:10:36.661 "raid_level": "raid0", 00:10:36.661 "superblock": true, 00:10:36.661 "num_base_bdevs": 4, 
00:10:36.661 "num_base_bdevs_discovered": 4, 00:10:36.661 "num_base_bdevs_operational": 4, 00:10:36.661 "base_bdevs_list": [ 00:10:36.661 { 00:10:36.661 "name": "BaseBdev1", 00:10:36.661 "uuid": "2fe32924-93b7-4fe8-b4df-8e2573cf303f", 00:10:36.661 "is_configured": true, 00:10:36.661 "data_offset": 2048, 00:10:36.661 "data_size": 63488 00:10:36.661 }, 00:10:36.661 { 00:10:36.661 "name": "BaseBdev2", 00:10:36.661 "uuid": "a0c47056-4479-453e-bb79-51050a4885a8", 00:10:36.661 "is_configured": true, 00:10:36.661 "data_offset": 2048, 00:10:36.661 "data_size": 63488 00:10:36.661 }, 00:10:36.661 { 00:10:36.661 "name": "BaseBdev3", 00:10:36.661 "uuid": "96bd832f-774d-4809-b471-472720c33a81", 00:10:36.661 "is_configured": true, 00:10:36.661 "data_offset": 2048, 00:10:36.661 "data_size": 63488 00:10:36.661 }, 00:10:36.661 { 00:10:36.661 "name": "BaseBdev4", 00:10:36.661 "uuid": "bef3ee7d-1226-401a-b2af-15a4f3bcbb1a", 00:10:36.661 "is_configured": true, 00:10:36.661 "data_offset": 2048, 00:10:36.662 "data_size": 63488 00:10:36.662 } 00:10:36.662 ] 00:10:36.662 }' 00:10:36.662 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.662 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.921 
02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.921 [2024-10-13 02:24:55.575375] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.921 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.183 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.183 "name": "Existed_Raid", 00:10:37.183 "aliases": [ 00:10:37.183 "9bb118cb-8964-4b73-8d25-91b103b4df55" 00:10:37.183 ], 00:10:37.183 "product_name": "Raid Volume", 00:10:37.183 "block_size": 512, 00:10:37.183 "num_blocks": 253952, 00:10:37.183 "uuid": "9bb118cb-8964-4b73-8d25-91b103b4df55", 00:10:37.183 "assigned_rate_limits": { 00:10:37.183 "rw_ios_per_sec": 0, 00:10:37.183 "rw_mbytes_per_sec": 0, 00:10:37.183 "r_mbytes_per_sec": 0, 00:10:37.183 "w_mbytes_per_sec": 0 00:10:37.183 }, 00:10:37.183 "claimed": false, 00:10:37.183 "zoned": false, 00:10:37.183 "supported_io_types": { 00:10:37.183 "read": true, 00:10:37.183 "write": true, 00:10:37.183 "unmap": true, 00:10:37.183 "flush": true, 00:10:37.183 "reset": true, 00:10:37.183 "nvme_admin": false, 00:10:37.183 "nvme_io": false, 00:10:37.183 "nvme_io_md": false, 00:10:37.183 "write_zeroes": true, 00:10:37.183 "zcopy": false, 00:10:37.183 "get_zone_info": false, 00:10:37.183 "zone_management": false, 00:10:37.183 "zone_append": false, 00:10:37.183 "compare": false, 00:10:37.183 "compare_and_write": false, 00:10:37.183 "abort": false, 00:10:37.183 "seek_hole": false, 00:10:37.183 "seek_data": false, 00:10:37.183 "copy": false, 00:10:37.183 
"nvme_iov_md": false 00:10:37.183 }, 00:10:37.183 "memory_domains": [ 00:10:37.183 { 00:10:37.183 "dma_device_id": "system", 00:10:37.183 "dma_device_type": 1 00:10:37.183 }, 00:10:37.183 { 00:10:37.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.183 "dma_device_type": 2 00:10:37.183 }, 00:10:37.183 { 00:10:37.183 "dma_device_id": "system", 00:10:37.183 "dma_device_type": 1 00:10:37.183 }, 00:10:37.183 { 00:10:37.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.183 "dma_device_type": 2 00:10:37.183 }, 00:10:37.183 { 00:10:37.183 "dma_device_id": "system", 00:10:37.183 "dma_device_type": 1 00:10:37.183 }, 00:10:37.183 { 00:10:37.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.183 "dma_device_type": 2 00:10:37.183 }, 00:10:37.183 { 00:10:37.183 "dma_device_id": "system", 00:10:37.183 "dma_device_type": 1 00:10:37.183 }, 00:10:37.183 { 00:10:37.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.183 "dma_device_type": 2 00:10:37.183 } 00:10:37.183 ], 00:10:37.183 "driver_specific": { 00:10:37.183 "raid": { 00:10:37.183 "uuid": "9bb118cb-8964-4b73-8d25-91b103b4df55", 00:10:37.184 "strip_size_kb": 64, 00:10:37.184 "state": "online", 00:10:37.184 "raid_level": "raid0", 00:10:37.184 "superblock": true, 00:10:37.184 "num_base_bdevs": 4, 00:10:37.184 "num_base_bdevs_discovered": 4, 00:10:37.184 "num_base_bdevs_operational": 4, 00:10:37.184 "base_bdevs_list": [ 00:10:37.184 { 00:10:37.184 "name": "BaseBdev1", 00:10:37.184 "uuid": "2fe32924-93b7-4fe8-b4df-8e2573cf303f", 00:10:37.184 "is_configured": true, 00:10:37.184 "data_offset": 2048, 00:10:37.184 "data_size": 63488 00:10:37.184 }, 00:10:37.184 { 00:10:37.184 "name": "BaseBdev2", 00:10:37.184 "uuid": "a0c47056-4479-453e-bb79-51050a4885a8", 00:10:37.184 "is_configured": true, 00:10:37.184 "data_offset": 2048, 00:10:37.184 "data_size": 63488 00:10:37.184 }, 00:10:37.184 { 00:10:37.184 "name": "BaseBdev3", 00:10:37.184 "uuid": "96bd832f-774d-4809-b471-472720c33a81", 00:10:37.184 "is_configured": true, 
00:10:37.184 "data_offset": 2048, 00:10:37.184 "data_size": 63488 00:10:37.184 }, 00:10:37.184 { 00:10:37.184 "name": "BaseBdev4", 00:10:37.184 "uuid": "bef3ee7d-1226-401a-b2af-15a4f3bcbb1a", 00:10:37.184 "is_configured": true, 00:10:37.184 "data_offset": 2048, 00:10:37.184 "data_size": 63488 00:10:37.184 } 00:10:37.184 ] 00:10:37.184 } 00:10:37.184 } 00:10:37.184 }' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:37.184 BaseBdev2 00:10:37.184 BaseBdev3 00:10:37.184 BaseBdev4' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.184 02:24:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.184 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.184 [2024-10-13 02:24:55.854522] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.184 [2024-10-13 02:24:55.854568] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.184 [2024-10-13 02:24:55.854634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.454 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.455 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.455 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.455 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.455 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.455 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:37.455 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.455 "name": "Existed_Raid", 00:10:37.455 "uuid": "9bb118cb-8964-4b73-8d25-91b103b4df55", 00:10:37.455 "strip_size_kb": 64, 00:10:37.455 "state": "offline", 00:10:37.455 "raid_level": "raid0", 00:10:37.455 "superblock": true, 00:10:37.455 "num_base_bdevs": 4, 00:10:37.455 "num_base_bdevs_discovered": 3, 00:10:37.455 "num_base_bdevs_operational": 3, 00:10:37.455 "base_bdevs_list": [ 00:10:37.455 { 00:10:37.455 "name": null, 00:10:37.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.455 "is_configured": false, 00:10:37.455 "data_offset": 0, 00:10:37.455 "data_size": 63488 00:10:37.455 }, 00:10:37.455 { 00:10:37.455 "name": "BaseBdev2", 00:10:37.455 "uuid": "a0c47056-4479-453e-bb79-51050a4885a8", 00:10:37.455 "is_configured": true, 00:10:37.455 "data_offset": 2048, 00:10:37.455 "data_size": 63488 00:10:37.455 }, 00:10:37.455 { 00:10:37.455 "name": "BaseBdev3", 00:10:37.455 "uuid": "96bd832f-774d-4809-b471-472720c33a81", 00:10:37.455 "is_configured": true, 00:10:37.455 "data_offset": 2048, 00:10:37.455 "data_size": 63488 00:10:37.455 }, 00:10:37.455 { 00:10:37.455 "name": "BaseBdev4", 00:10:37.455 "uuid": "bef3ee7d-1226-401a-b2af-15a4f3bcbb1a", 00:10:37.455 "is_configured": true, 00:10:37.455 "data_offset": 2048, 00:10:37.455 "data_size": 63488 00:10:37.455 } 00:10:37.455 ] 00:10:37.455 }' 00:10:37.455 02:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.455 02:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.726 
02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.726 [2024-10-13 02:24:56.363785] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.726 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.987 [2024-10-13 02:24:56.449314] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:37.987 02:24:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.987 [2024-10-13 02:24:56.530710] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:37.987 [2024-10-13 02:24:56.530818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.987 BaseBdev2 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:37.987 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.988 [ 00:10:37.988 { 00:10:37.988 "name": "BaseBdev2", 00:10:37.988 "aliases": [ 00:10:37.988 
"c9697da7-65ea-43ca-95cc-0cb6f017c84d" 00:10:37.988 ], 00:10:37.988 "product_name": "Malloc disk", 00:10:37.988 "block_size": 512, 00:10:37.988 "num_blocks": 65536, 00:10:37.988 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:37.988 "assigned_rate_limits": { 00:10:37.988 "rw_ios_per_sec": 0, 00:10:37.988 "rw_mbytes_per_sec": 0, 00:10:37.988 "r_mbytes_per_sec": 0, 00:10:37.988 "w_mbytes_per_sec": 0 00:10:37.988 }, 00:10:37.988 "claimed": false, 00:10:37.988 "zoned": false, 00:10:37.988 "supported_io_types": { 00:10:37.988 "read": true, 00:10:37.988 "write": true, 00:10:37.988 "unmap": true, 00:10:37.988 "flush": true, 00:10:37.988 "reset": true, 00:10:37.988 "nvme_admin": false, 00:10:37.988 "nvme_io": false, 00:10:37.988 "nvme_io_md": false, 00:10:37.988 "write_zeroes": true, 00:10:37.988 "zcopy": true, 00:10:37.988 "get_zone_info": false, 00:10:37.988 "zone_management": false, 00:10:37.988 "zone_append": false, 00:10:37.988 "compare": false, 00:10:37.988 "compare_and_write": false, 00:10:37.988 "abort": true, 00:10:37.988 "seek_hole": false, 00:10:37.988 "seek_data": false, 00:10:37.988 "copy": true, 00:10:37.988 "nvme_iov_md": false 00:10:37.988 }, 00:10:37.988 "memory_domains": [ 00:10:37.988 { 00:10:37.988 "dma_device_id": "system", 00:10:37.988 "dma_device_type": 1 00:10:37.988 }, 00:10:37.988 { 00:10:37.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.988 "dma_device_type": 2 00:10:37.988 } 00:10:37.988 ], 00:10:37.988 "driver_specific": {} 00:10:37.988 } 00:10:37.988 ] 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.988 02:24:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.988 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.249 BaseBdev3 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.249 [ 00:10:38.249 { 
00:10:38.249 "name": "BaseBdev3", 00:10:38.249 "aliases": [ 00:10:38.249 "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a" 00:10:38.249 ], 00:10:38.249 "product_name": "Malloc disk", 00:10:38.249 "block_size": 512, 00:10:38.249 "num_blocks": 65536, 00:10:38.249 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:38.249 "assigned_rate_limits": { 00:10:38.249 "rw_ios_per_sec": 0, 00:10:38.249 "rw_mbytes_per_sec": 0, 00:10:38.249 "r_mbytes_per_sec": 0, 00:10:38.249 "w_mbytes_per_sec": 0 00:10:38.249 }, 00:10:38.249 "claimed": false, 00:10:38.249 "zoned": false, 00:10:38.249 "supported_io_types": { 00:10:38.249 "read": true, 00:10:38.249 "write": true, 00:10:38.249 "unmap": true, 00:10:38.249 "flush": true, 00:10:38.249 "reset": true, 00:10:38.249 "nvme_admin": false, 00:10:38.249 "nvme_io": false, 00:10:38.249 "nvme_io_md": false, 00:10:38.249 "write_zeroes": true, 00:10:38.249 "zcopy": true, 00:10:38.249 "get_zone_info": false, 00:10:38.249 "zone_management": false, 00:10:38.249 "zone_append": false, 00:10:38.249 "compare": false, 00:10:38.249 "compare_and_write": false, 00:10:38.249 "abort": true, 00:10:38.249 "seek_hole": false, 00:10:38.249 "seek_data": false, 00:10:38.249 "copy": true, 00:10:38.249 "nvme_iov_md": false 00:10:38.249 }, 00:10:38.249 "memory_domains": [ 00:10:38.249 { 00:10:38.249 "dma_device_id": "system", 00:10:38.249 "dma_device_type": 1 00:10:38.249 }, 00:10:38.249 { 00:10:38.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.249 "dma_device_type": 2 00:10:38.249 } 00:10:38.249 ], 00:10:38.249 "driver_specific": {} 00:10:38.249 } 00:10:38.249 ] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.249 BaseBdev4 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:38.249 [ 00:10:38.249 { 00:10:38.249 "name": "BaseBdev4", 00:10:38.249 "aliases": [ 00:10:38.249 "5ad77fee-afdf-47d0-bc99-bdbba87de886" 00:10:38.249 ], 00:10:38.249 "product_name": "Malloc disk", 00:10:38.249 "block_size": 512, 00:10:38.249 "num_blocks": 65536, 00:10:38.249 "uuid": "5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:38.249 "assigned_rate_limits": { 00:10:38.249 "rw_ios_per_sec": 0, 00:10:38.249 "rw_mbytes_per_sec": 0, 00:10:38.249 "r_mbytes_per_sec": 0, 00:10:38.249 "w_mbytes_per_sec": 0 00:10:38.249 }, 00:10:38.249 "claimed": false, 00:10:38.249 "zoned": false, 00:10:38.249 "supported_io_types": { 00:10:38.249 "read": true, 00:10:38.249 "write": true, 00:10:38.249 "unmap": true, 00:10:38.249 "flush": true, 00:10:38.249 "reset": true, 00:10:38.249 "nvme_admin": false, 00:10:38.249 "nvme_io": false, 00:10:38.249 "nvme_io_md": false, 00:10:38.249 "write_zeroes": true, 00:10:38.249 "zcopy": true, 00:10:38.249 "get_zone_info": false, 00:10:38.249 "zone_management": false, 00:10:38.249 "zone_append": false, 00:10:38.249 "compare": false, 00:10:38.249 "compare_and_write": false, 00:10:38.249 "abort": true, 00:10:38.249 "seek_hole": false, 00:10:38.249 "seek_data": false, 00:10:38.249 "copy": true, 00:10:38.249 "nvme_iov_md": false 00:10:38.249 }, 00:10:38.249 "memory_domains": [ 00:10:38.249 { 00:10:38.249 "dma_device_id": "system", 00:10:38.249 "dma_device_type": 1 00:10:38.249 }, 00:10:38.249 { 00:10:38.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.249 "dma_device_type": 2 00:10:38.249 } 00:10:38.249 ], 00:10:38.249 "driver_specific": {} 00:10:38.249 } 00:10:38.249 ] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.249 02:24:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.249 [2024-10-13 02:24:56.794754] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.249 [2024-10-13 02:24:56.794849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.249 [2024-10-13 02:24:56.794942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.249 [2024-10-13 02:24:56.797171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.249 [2024-10-13 02:24:56.797265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.249 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.249 "name": "Existed_Raid", 00:10:38.249 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:38.249 "strip_size_kb": 64, 00:10:38.250 "state": "configuring", 00:10:38.250 "raid_level": "raid0", 00:10:38.250 "superblock": true, 00:10:38.250 "num_base_bdevs": 4, 00:10:38.250 "num_base_bdevs_discovered": 3, 00:10:38.250 "num_base_bdevs_operational": 4, 00:10:38.250 "base_bdevs_list": [ 00:10:38.250 { 00:10:38.250 "name": "BaseBdev1", 00:10:38.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.250 "is_configured": false, 00:10:38.250 "data_offset": 0, 00:10:38.250 "data_size": 0 00:10:38.250 }, 00:10:38.250 { 00:10:38.250 "name": "BaseBdev2", 00:10:38.250 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:38.250 "is_configured": true, 00:10:38.250 "data_offset": 2048, 00:10:38.250 "data_size": 63488 
00:10:38.250 }, 00:10:38.250 { 00:10:38.250 "name": "BaseBdev3", 00:10:38.250 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:38.250 "is_configured": true, 00:10:38.250 "data_offset": 2048, 00:10:38.250 "data_size": 63488 00:10:38.250 }, 00:10:38.250 { 00:10:38.250 "name": "BaseBdev4", 00:10:38.250 "uuid": "5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:38.250 "is_configured": true, 00:10:38.250 "data_offset": 2048, 00:10:38.250 "data_size": 63488 00:10:38.250 } 00:10:38.250 ] 00:10:38.250 }' 00:10:38.250 02:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.250 02:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 [2024-10-13 02:24:57.265976] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.820 "name": "Existed_Raid", 00:10:38.820 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:38.820 "strip_size_kb": 64, 00:10:38.820 "state": "configuring", 00:10:38.820 "raid_level": "raid0", 00:10:38.820 "superblock": true, 00:10:38.820 "num_base_bdevs": 4, 00:10:38.820 "num_base_bdevs_discovered": 2, 00:10:38.820 "num_base_bdevs_operational": 4, 00:10:38.820 "base_bdevs_list": [ 00:10:38.820 { 00:10:38.820 "name": "BaseBdev1", 00:10:38.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.820 "is_configured": false, 00:10:38.820 "data_offset": 0, 00:10:38.820 "data_size": 0 00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "name": null, 00:10:38.820 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:38.820 "is_configured": false, 00:10:38.820 "data_offset": 0, 00:10:38.820 "data_size": 63488 
00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "name": "BaseBdev3", 00:10:38.820 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:38.820 "is_configured": true, 00:10:38.820 "data_offset": 2048, 00:10:38.820 "data_size": 63488 00:10:38.820 }, 00:10:38.820 { 00:10:38.820 "name": "BaseBdev4", 00:10:38.820 "uuid": "5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:38.820 "is_configured": true, 00:10:38.820 "data_offset": 2048, 00:10:38.820 "data_size": 63488 00:10:38.820 } 00:10:38.820 ] 00:10:38.820 }' 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.820 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.080 [2024-10-13 02:24:57.739038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.080 BaseBdev1 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.080 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.340 [ 00:10:39.340 { 00:10:39.340 "name": "BaseBdev1", 00:10:39.340 "aliases": [ 00:10:39.340 "0034324b-c8a9-40c3-8d10-82deee8be7dc" 00:10:39.340 ], 00:10:39.340 "product_name": "Malloc disk", 00:10:39.340 "block_size": 512, 00:10:39.340 "num_blocks": 65536, 00:10:39.340 "uuid": "0034324b-c8a9-40c3-8d10-82deee8be7dc", 00:10:39.340 "assigned_rate_limits": { 00:10:39.340 "rw_ios_per_sec": 0, 00:10:39.340 "rw_mbytes_per_sec": 0, 
00:10:39.340 "r_mbytes_per_sec": 0, 00:10:39.340 "w_mbytes_per_sec": 0 00:10:39.340 }, 00:10:39.340 "claimed": true, 00:10:39.340 "claim_type": "exclusive_write", 00:10:39.340 "zoned": false, 00:10:39.340 "supported_io_types": { 00:10:39.340 "read": true, 00:10:39.340 "write": true, 00:10:39.340 "unmap": true, 00:10:39.340 "flush": true, 00:10:39.340 "reset": true, 00:10:39.340 "nvme_admin": false, 00:10:39.340 "nvme_io": false, 00:10:39.340 "nvme_io_md": false, 00:10:39.340 "write_zeroes": true, 00:10:39.340 "zcopy": true, 00:10:39.340 "get_zone_info": false, 00:10:39.340 "zone_management": false, 00:10:39.340 "zone_append": false, 00:10:39.340 "compare": false, 00:10:39.340 "compare_and_write": false, 00:10:39.340 "abort": true, 00:10:39.340 "seek_hole": false, 00:10:39.340 "seek_data": false, 00:10:39.340 "copy": true, 00:10:39.340 "nvme_iov_md": false 00:10:39.340 }, 00:10:39.340 "memory_domains": [ 00:10:39.340 { 00:10:39.340 "dma_device_id": "system", 00:10:39.340 "dma_device_type": 1 00:10:39.340 }, 00:10:39.340 { 00:10:39.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.340 "dma_device_type": 2 00:10:39.340 } 00:10:39.341 ], 00:10:39.341 "driver_specific": {} 00:10:39.341 } 00:10:39.341 ] 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.341 02:24:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.341 "name": "Existed_Raid", 00:10:39.341 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:39.341 "strip_size_kb": 64, 00:10:39.341 "state": "configuring", 00:10:39.341 "raid_level": "raid0", 00:10:39.341 "superblock": true, 00:10:39.341 "num_base_bdevs": 4, 00:10:39.341 "num_base_bdevs_discovered": 3, 00:10:39.341 "num_base_bdevs_operational": 4, 00:10:39.341 "base_bdevs_list": [ 00:10:39.341 { 00:10:39.341 "name": "BaseBdev1", 00:10:39.341 "uuid": "0034324b-c8a9-40c3-8d10-82deee8be7dc", 00:10:39.341 "is_configured": true, 00:10:39.341 "data_offset": 2048, 00:10:39.341 "data_size": 63488 00:10:39.341 }, 00:10:39.341 { 
00:10:39.341 "name": null, 00:10:39.341 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:39.341 "is_configured": false, 00:10:39.341 "data_offset": 0, 00:10:39.341 "data_size": 63488 00:10:39.341 }, 00:10:39.341 { 00:10:39.341 "name": "BaseBdev3", 00:10:39.341 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:39.341 "is_configured": true, 00:10:39.341 "data_offset": 2048, 00:10:39.341 "data_size": 63488 00:10:39.341 }, 00:10:39.341 { 00:10:39.341 "name": "BaseBdev4", 00:10:39.341 "uuid": "5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:39.341 "is_configured": true, 00:10:39.341 "data_offset": 2048, 00:10:39.341 "data_size": 63488 00:10:39.341 } 00:10:39.341 ] 00:10:39.341 }' 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.341 02:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.600 [2024-10-13 02:24:58.246278] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.600 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.860 02:24:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.860 "name": "Existed_Raid", 00:10:39.860 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:39.860 "strip_size_kb": 64, 00:10:39.860 "state": "configuring", 00:10:39.860 "raid_level": "raid0", 00:10:39.860 "superblock": true, 00:10:39.860 "num_base_bdevs": 4, 00:10:39.860 "num_base_bdevs_discovered": 2, 00:10:39.860 "num_base_bdevs_operational": 4, 00:10:39.860 "base_bdevs_list": [ 00:10:39.860 { 00:10:39.860 "name": "BaseBdev1", 00:10:39.860 "uuid": "0034324b-c8a9-40c3-8d10-82deee8be7dc", 00:10:39.860 "is_configured": true, 00:10:39.860 "data_offset": 2048, 00:10:39.860 "data_size": 63488 00:10:39.860 }, 00:10:39.860 { 00:10:39.860 "name": null, 00:10:39.860 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:39.860 "is_configured": false, 00:10:39.860 "data_offset": 0, 00:10:39.860 "data_size": 63488 00:10:39.860 }, 00:10:39.860 { 00:10:39.860 "name": null, 00:10:39.860 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:39.860 "is_configured": false, 00:10:39.860 "data_offset": 0, 00:10:39.860 "data_size": 63488 00:10:39.860 }, 00:10:39.860 { 00:10:39.860 "name": "BaseBdev4", 00:10:39.860 "uuid": "5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:39.860 "is_configured": true, 00:10:39.860 "data_offset": 2048, 00:10:39.860 "data_size": 63488 00:10:39.860 } 00:10:39.860 ] 00:10:39.860 }' 00:10:39.860 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.860 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.120 
02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.120 [2024-10-13 02:24:58.705545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.120 "name": "Existed_Raid", 00:10:40.120 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:40.120 "strip_size_kb": 64, 00:10:40.120 "state": "configuring", 00:10:40.120 "raid_level": "raid0", 00:10:40.120 "superblock": true, 00:10:40.120 "num_base_bdevs": 4, 00:10:40.120 "num_base_bdevs_discovered": 3, 00:10:40.120 "num_base_bdevs_operational": 4, 00:10:40.120 "base_bdevs_list": [ 00:10:40.120 { 00:10:40.120 "name": "BaseBdev1", 00:10:40.120 "uuid": "0034324b-c8a9-40c3-8d10-82deee8be7dc", 00:10:40.120 "is_configured": true, 00:10:40.120 "data_offset": 2048, 00:10:40.120 "data_size": 63488 00:10:40.120 }, 00:10:40.120 { 00:10:40.120 "name": null, 00:10:40.120 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:40.120 "is_configured": false, 00:10:40.120 "data_offset": 0, 00:10:40.120 "data_size": 63488 00:10:40.120 }, 00:10:40.120 { 00:10:40.120 "name": "BaseBdev3", 00:10:40.120 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:40.120 "is_configured": true, 00:10:40.120 "data_offset": 2048, 00:10:40.120 "data_size": 63488 00:10:40.120 }, 00:10:40.120 { 00:10:40.120 "name": "BaseBdev4", 00:10:40.120 "uuid": 
"5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:40.120 "is_configured": true, 00:10:40.120 "data_offset": 2048, 00:10:40.120 "data_size": 63488 00:10:40.120 } 00:10:40.120 ] 00:10:40.120 }' 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.120 02:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.690 [2024-10-13 02:24:59.200680] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.690 "name": "Existed_Raid", 00:10:40.690 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:40.690 "strip_size_kb": 64, 00:10:40.690 "state": "configuring", 00:10:40.690 "raid_level": "raid0", 00:10:40.690 "superblock": true, 00:10:40.690 "num_base_bdevs": 4, 00:10:40.690 "num_base_bdevs_discovered": 2, 00:10:40.690 "num_base_bdevs_operational": 4, 00:10:40.690 "base_bdevs_list": [ 00:10:40.690 { 00:10:40.690 "name": null, 00:10:40.690 
"uuid": "0034324b-c8a9-40c3-8d10-82deee8be7dc", 00:10:40.690 "is_configured": false, 00:10:40.690 "data_offset": 0, 00:10:40.690 "data_size": 63488 00:10:40.690 }, 00:10:40.690 { 00:10:40.690 "name": null, 00:10:40.690 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:40.690 "is_configured": false, 00:10:40.690 "data_offset": 0, 00:10:40.690 "data_size": 63488 00:10:40.690 }, 00:10:40.690 { 00:10:40.690 "name": "BaseBdev3", 00:10:40.690 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:40.690 "is_configured": true, 00:10:40.690 "data_offset": 2048, 00:10:40.690 "data_size": 63488 00:10:40.690 }, 00:10:40.690 { 00:10:40.690 "name": "BaseBdev4", 00:10:40.690 "uuid": "5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:40.690 "is_configured": true, 00:10:40.690 "data_offset": 2048, 00:10:40.690 "data_size": 63488 00:10:40.690 } 00:10:40.690 ] 00:10:40.690 }' 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.690 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.950 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.950 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:40.950 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.950 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.210 [2024-10-13 02:24:59.676557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.210 "name": "Existed_Raid", 00:10:41.210 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:41.210 "strip_size_kb": 64, 00:10:41.210 "state": "configuring", 00:10:41.210 "raid_level": "raid0", 00:10:41.210 "superblock": true, 00:10:41.210 "num_base_bdevs": 4, 00:10:41.210 "num_base_bdevs_discovered": 3, 00:10:41.210 "num_base_bdevs_operational": 4, 00:10:41.210 "base_bdevs_list": [ 00:10:41.210 { 00:10:41.210 "name": null, 00:10:41.210 "uuid": "0034324b-c8a9-40c3-8d10-82deee8be7dc", 00:10:41.210 "is_configured": false, 00:10:41.210 "data_offset": 0, 00:10:41.210 "data_size": 63488 00:10:41.210 }, 00:10:41.210 { 00:10:41.210 "name": "BaseBdev2", 00:10:41.210 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:41.210 "is_configured": true, 00:10:41.210 "data_offset": 2048, 00:10:41.210 "data_size": 63488 00:10:41.210 }, 00:10:41.210 { 00:10:41.210 "name": "BaseBdev3", 00:10:41.210 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:41.210 "is_configured": true, 00:10:41.210 "data_offset": 2048, 00:10:41.210 "data_size": 63488 00:10:41.210 }, 00:10:41.210 { 00:10:41.210 "name": "BaseBdev4", 00:10:41.210 "uuid": "5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:41.210 "is_configured": true, 00:10:41.210 "data_offset": 2048, 00:10:41.210 "data_size": 63488 00:10:41.210 } 00:10:41.210 ] 00:10:41.210 }' 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.210 02:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.469 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.469 02:25:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:41.469 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.470 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.470 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0034324b-c8a9-40c3-8d10-82deee8be7dc 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.730 NewBaseBdev 00:10:41.730 [2024-10-13 02:25:00.201536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:41.730 [2024-10-13 02:25:00.201760] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:41.730 [2024-10-13 02:25:00.201776] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:41.730 [2024-10-13 02:25:00.202109] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:41.730 [2024-10-13 02:25:00.202240] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:41.730 [2024-10-13 02:25:00.202252] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:41.730 [2024-10-13 02:25:00.202365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.730 
02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.730 [ 00:10:41.730 { 00:10:41.730 "name": "NewBaseBdev", 00:10:41.730 "aliases": [ 00:10:41.730 "0034324b-c8a9-40c3-8d10-82deee8be7dc" 00:10:41.730 ], 00:10:41.730 "product_name": "Malloc disk", 00:10:41.730 "block_size": 512, 00:10:41.730 "num_blocks": 65536, 00:10:41.730 "uuid": "0034324b-c8a9-40c3-8d10-82deee8be7dc", 00:10:41.730 "assigned_rate_limits": { 00:10:41.730 "rw_ios_per_sec": 0, 00:10:41.730 "rw_mbytes_per_sec": 0, 00:10:41.730 "r_mbytes_per_sec": 0, 00:10:41.730 "w_mbytes_per_sec": 0 00:10:41.730 }, 00:10:41.730 "claimed": true, 00:10:41.730 "claim_type": "exclusive_write", 00:10:41.730 "zoned": false, 00:10:41.730 "supported_io_types": { 00:10:41.730 "read": true, 00:10:41.730 "write": true, 00:10:41.730 "unmap": true, 00:10:41.730 "flush": true, 00:10:41.730 "reset": true, 00:10:41.730 "nvme_admin": false, 00:10:41.730 "nvme_io": false, 00:10:41.730 "nvme_io_md": false, 00:10:41.730 "write_zeroes": true, 00:10:41.730 "zcopy": true, 00:10:41.730 "get_zone_info": false, 00:10:41.730 "zone_management": false, 00:10:41.730 "zone_append": false, 00:10:41.730 "compare": false, 00:10:41.730 "compare_and_write": false, 00:10:41.730 "abort": true, 00:10:41.730 "seek_hole": false, 00:10:41.730 "seek_data": false, 00:10:41.730 "copy": true, 00:10:41.730 "nvme_iov_md": false 00:10:41.730 }, 00:10:41.730 "memory_domains": [ 00:10:41.730 { 00:10:41.730 "dma_device_id": "system", 00:10:41.730 "dma_device_type": 1 00:10:41.730 }, 00:10:41.730 { 00:10:41.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.730 "dma_device_type": 2 00:10:41.730 } 00:10:41.730 ], 00:10:41.730 "driver_specific": {} 00:10:41.730 } 00:10:41.730 ] 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:41.730 02:25:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.730 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.731 "name": "Existed_Raid", 00:10:41.731 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:41.731 "strip_size_kb": 64, 00:10:41.731 
"state": "online", 00:10:41.731 "raid_level": "raid0", 00:10:41.731 "superblock": true, 00:10:41.731 "num_base_bdevs": 4, 00:10:41.731 "num_base_bdevs_discovered": 4, 00:10:41.731 "num_base_bdevs_operational": 4, 00:10:41.731 "base_bdevs_list": [ 00:10:41.731 { 00:10:41.731 "name": "NewBaseBdev", 00:10:41.731 "uuid": "0034324b-c8a9-40c3-8d10-82deee8be7dc", 00:10:41.731 "is_configured": true, 00:10:41.731 "data_offset": 2048, 00:10:41.731 "data_size": 63488 00:10:41.731 }, 00:10:41.731 { 00:10:41.731 "name": "BaseBdev2", 00:10:41.731 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:41.731 "is_configured": true, 00:10:41.731 "data_offset": 2048, 00:10:41.731 "data_size": 63488 00:10:41.731 }, 00:10:41.731 { 00:10:41.731 "name": "BaseBdev3", 00:10:41.731 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:41.731 "is_configured": true, 00:10:41.731 "data_offset": 2048, 00:10:41.731 "data_size": 63488 00:10:41.731 }, 00:10:41.731 { 00:10:41.731 "name": "BaseBdev4", 00:10:41.731 "uuid": "5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:41.731 "is_configured": true, 00:10:41.731 "data_offset": 2048, 00:10:41.731 "data_size": 63488 00:10:41.731 } 00:10:41.731 ] 00:10:41.731 }' 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.731 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.991 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.991 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.991 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.991 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.991 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.991 
02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.991 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.991 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.991 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.991 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.251 [2024-10-13 02:25:00.673181] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.251 "name": "Existed_Raid", 00:10:42.251 "aliases": [ 00:10:42.251 "38d01111-8528-4e69-8345-49234a9a8c47" 00:10:42.251 ], 00:10:42.251 "product_name": "Raid Volume", 00:10:42.251 "block_size": 512, 00:10:42.251 "num_blocks": 253952, 00:10:42.251 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:42.251 "assigned_rate_limits": { 00:10:42.251 "rw_ios_per_sec": 0, 00:10:42.251 "rw_mbytes_per_sec": 0, 00:10:42.251 "r_mbytes_per_sec": 0, 00:10:42.251 "w_mbytes_per_sec": 0 00:10:42.251 }, 00:10:42.251 "claimed": false, 00:10:42.251 "zoned": false, 00:10:42.251 "supported_io_types": { 00:10:42.251 "read": true, 00:10:42.251 "write": true, 00:10:42.251 "unmap": true, 00:10:42.251 "flush": true, 00:10:42.251 "reset": true, 00:10:42.251 "nvme_admin": false, 00:10:42.251 "nvme_io": false, 00:10:42.251 "nvme_io_md": false, 00:10:42.251 "write_zeroes": true, 00:10:42.251 "zcopy": false, 00:10:42.251 "get_zone_info": false, 00:10:42.251 "zone_management": false, 00:10:42.251 "zone_append": false, 00:10:42.251 "compare": false, 00:10:42.251 "compare_and_write": false, 00:10:42.251 "abort": 
false, 00:10:42.251 "seek_hole": false, 00:10:42.251 "seek_data": false, 00:10:42.251 "copy": false, 00:10:42.251 "nvme_iov_md": false 00:10:42.251 }, 00:10:42.251 "memory_domains": [ 00:10:42.251 { 00:10:42.251 "dma_device_id": "system", 00:10:42.251 "dma_device_type": 1 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.251 "dma_device_type": 2 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 "dma_device_id": "system", 00:10:42.251 "dma_device_type": 1 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.251 "dma_device_type": 2 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 "dma_device_id": "system", 00:10:42.251 "dma_device_type": 1 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.251 "dma_device_type": 2 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 "dma_device_id": "system", 00:10:42.251 "dma_device_type": 1 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.251 "dma_device_type": 2 00:10:42.251 } 00:10:42.251 ], 00:10:42.251 "driver_specific": { 00:10:42.251 "raid": { 00:10:42.251 "uuid": "38d01111-8528-4e69-8345-49234a9a8c47", 00:10:42.251 "strip_size_kb": 64, 00:10:42.251 "state": "online", 00:10:42.251 "raid_level": "raid0", 00:10:42.251 "superblock": true, 00:10:42.251 "num_base_bdevs": 4, 00:10:42.251 "num_base_bdevs_discovered": 4, 00:10:42.251 "num_base_bdevs_operational": 4, 00:10:42.251 "base_bdevs_list": [ 00:10:42.251 { 00:10:42.251 "name": "NewBaseBdev", 00:10:42.251 "uuid": "0034324b-c8a9-40c3-8d10-82deee8be7dc", 00:10:42.251 "is_configured": true, 00:10:42.251 "data_offset": 2048, 00:10:42.251 "data_size": 63488 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 "name": "BaseBdev2", 00:10:42.251 "uuid": "c9697da7-65ea-43ca-95cc-0cb6f017c84d", 00:10:42.251 "is_configured": true, 00:10:42.251 "data_offset": 2048, 00:10:42.251 "data_size": 63488 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 
"name": "BaseBdev3", 00:10:42.251 "uuid": "1e2609a9-e6b1-4a9d-95f0-ce4b74f6ac2a", 00:10:42.251 "is_configured": true, 00:10:42.251 "data_offset": 2048, 00:10:42.251 "data_size": 63488 00:10:42.251 }, 00:10:42.251 { 00:10:42.251 "name": "BaseBdev4", 00:10:42.251 "uuid": "5ad77fee-afdf-47d0-bc99-bdbba87de886", 00:10:42.251 "is_configured": true, 00:10:42.251 "data_offset": 2048, 00:10:42.251 "data_size": 63488 00:10:42.251 } 00:10:42.251 ] 00:10:42.251 } 00:10:42.251 } 00:10:42.251 }' 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:42.251 BaseBdev2 00:10:42.251 BaseBdev3 00:10:42.251 BaseBdev4' 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.251 02:25:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.251 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.512 [2024-10-13 02:25:00.980253] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.512 [2024-10-13 02:25:00.980334] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.512 [2024-10-13 02:25:00.980454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.512 [2024-10-13 02:25:00.980563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.512 [2024-10-13 02:25:00.980609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80904 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80904 ']' 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80904 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.512 02:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80904 00:10:42.512 02:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.512 02:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.512 02:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80904' 00:10:42.512 killing process with pid 80904 00:10:42.512 02:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80904 00:10:42.512 [2024-10-13 02:25:01.030559] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.512 02:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80904 00:10:42.512 [2024-10-13 02:25:01.113570] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.082 02:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:43.082 00:10:43.082 real 0m9.696s 00:10:43.082 user 0m16.162s 00:10:43.082 sys 0m2.119s 00:10:43.082 ************************************ 00:10:43.082 END TEST raid_state_function_test_sb 00:10:43.082 
************************************ 00:10:43.082 02:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.082 02:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.083 02:25:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:43.083 02:25:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:43.083 02:25:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.083 02:25:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.083 ************************************ 00:10:43.083 START TEST raid_superblock_test 00:10:43.083 ************************************ 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81552 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81552 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81552 ']' 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.083 02:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.083 [2024-10-13 02:25:01.662049] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:43.083 [2024-10-13 02:25:01.662360] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81552 ] 00:10:43.343 [2024-10-13 02:25:01.798827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.343 [2024-10-13 02:25:01.876499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.343 [2024-10-13 02:25:01.958718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.343 [2024-10-13 02:25:01.958910] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:43.913 
02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.913 malloc1 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.913 [2024-10-13 02:25:02.533002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:43.913 [2024-10-13 02:25:02.533117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.913 [2024-10-13 02:25:02.533163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:43.913 [2024-10-13 02:25:02.533204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.913 [2024-10-13 02:25:02.535717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.913 [2024-10-13 02:25:02.535797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:43.913 pt1 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.913 malloc2 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.913 [2024-10-13 02:25:02.582338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.913 [2024-10-13 02:25:02.582446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.913 [2024-10-13 02:25:02.582484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:43.913 [2024-10-13 02:25:02.582516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.913 [2024-10-13 02:25:02.585075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.913 [2024-10-13 02:25:02.585154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.913 
pt2 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.913 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.174 malloc3 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.174 [2024-10-13 02:25:02.617941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:44.174 [2024-10-13 02:25:02.618052] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.174 [2024-10-13 02:25:02.618094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:44.174 [2024-10-13 02:25:02.618129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.174 [2024-10-13 02:25:02.620679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.174 [2024-10-13 02:25:02.620759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:44.174 pt3 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.174 malloc4 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.174 [2024-10-13 02:25:02.657218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:44.174 [2024-10-13 02:25:02.657323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.174 [2024-10-13 02:25:02.657360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:44.174 [2024-10-13 02:25:02.657393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.174 [2024-10-13 02:25:02.660016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.174 [2024-10-13 02:25:02.660105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:44.174 pt4 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.174 [2024-10-13 02:25:02.669231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.174 [2024-10-13 
02:25:02.671477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.174 [2024-10-13 02:25:02.671551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:44.174 [2024-10-13 02:25:02.671596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:44.174 [2024-10-13 02:25:02.671758] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:44.174 [2024-10-13 02:25:02.671773] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.174 [2024-10-13 02:25:02.672062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:44.174 [2024-10-13 02:25:02.672226] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:44.174 [2024-10-13 02:25:02.672237] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:44.174 [2024-10-13 02:25:02.672379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:44.174 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.175 "name": "raid_bdev1", 00:10:44.175 "uuid": "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019", 00:10:44.175 "strip_size_kb": 64, 00:10:44.175 "state": "online", 00:10:44.175 "raid_level": "raid0", 00:10:44.175 "superblock": true, 00:10:44.175 "num_base_bdevs": 4, 00:10:44.175 "num_base_bdevs_discovered": 4, 00:10:44.175 "num_base_bdevs_operational": 4, 00:10:44.175 "base_bdevs_list": [ 00:10:44.175 { 00:10:44.175 "name": "pt1", 00:10:44.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.175 "is_configured": true, 00:10:44.175 "data_offset": 2048, 00:10:44.175 "data_size": 63488 00:10:44.175 }, 00:10:44.175 { 00:10:44.175 "name": "pt2", 00:10:44.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.175 "is_configured": true, 00:10:44.175 "data_offset": 2048, 00:10:44.175 "data_size": 63488 00:10:44.175 }, 00:10:44.175 { 00:10:44.175 "name": "pt3", 00:10:44.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.175 "is_configured": true, 00:10:44.175 "data_offset": 2048, 00:10:44.175 
"data_size": 63488 00:10:44.175 }, 00:10:44.175 { 00:10:44.175 "name": "pt4", 00:10:44.175 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.175 "is_configured": true, 00:10:44.175 "data_offset": 2048, 00:10:44.175 "data_size": 63488 00:10:44.175 } 00:10:44.175 ] 00:10:44.175 }' 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.175 02:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.746 [2024-10-13 02:25:03.148756] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.746 "name": "raid_bdev1", 00:10:44.746 "aliases": [ 00:10:44.746 "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019" 
00:10:44.746 ], 00:10:44.746 "product_name": "Raid Volume", 00:10:44.746 "block_size": 512, 00:10:44.746 "num_blocks": 253952, 00:10:44.746 "uuid": "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019", 00:10:44.746 "assigned_rate_limits": { 00:10:44.746 "rw_ios_per_sec": 0, 00:10:44.746 "rw_mbytes_per_sec": 0, 00:10:44.746 "r_mbytes_per_sec": 0, 00:10:44.746 "w_mbytes_per_sec": 0 00:10:44.746 }, 00:10:44.746 "claimed": false, 00:10:44.746 "zoned": false, 00:10:44.746 "supported_io_types": { 00:10:44.746 "read": true, 00:10:44.746 "write": true, 00:10:44.746 "unmap": true, 00:10:44.746 "flush": true, 00:10:44.746 "reset": true, 00:10:44.746 "nvme_admin": false, 00:10:44.746 "nvme_io": false, 00:10:44.746 "nvme_io_md": false, 00:10:44.746 "write_zeroes": true, 00:10:44.746 "zcopy": false, 00:10:44.746 "get_zone_info": false, 00:10:44.746 "zone_management": false, 00:10:44.746 "zone_append": false, 00:10:44.746 "compare": false, 00:10:44.746 "compare_and_write": false, 00:10:44.746 "abort": false, 00:10:44.746 "seek_hole": false, 00:10:44.746 "seek_data": false, 00:10:44.746 "copy": false, 00:10:44.746 "nvme_iov_md": false 00:10:44.746 }, 00:10:44.746 "memory_domains": [ 00:10:44.746 { 00:10:44.746 "dma_device_id": "system", 00:10:44.746 "dma_device_type": 1 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.746 "dma_device_type": 2 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "dma_device_id": "system", 00:10:44.746 "dma_device_type": 1 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.746 "dma_device_type": 2 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "dma_device_id": "system", 00:10:44.746 "dma_device_type": 1 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.746 "dma_device_type": 2 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "dma_device_id": "system", 00:10:44.746 "dma_device_type": 1 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:44.746 "dma_device_type": 2 00:10:44.746 } 00:10:44.746 ], 00:10:44.746 "driver_specific": { 00:10:44.746 "raid": { 00:10:44.746 "uuid": "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019", 00:10:44.746 "strip_size_kb": 64, 00:10:44.746 "state": "online", 00:10:44.746 "raid_level": "raid0", 00:10:44.746 "superblock": true, 00:10:44.746 "num_base_bdevs": 4, 00:10:44.746 "num_base_bdevs_discovered": 4, 00:10:44.746 "num_base_bdevs_operational": 4, 00:10:44.746 "base_bdevs_list": [ 00:10:44.746 { 00:10:44.746 "name": "pt1", 00:10:44.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.746 "is_configured": true, 00:10:44.746 "data_offset": 2048, 00:10:44.746 "data_size": 63488 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "name": "pt2", 00:10:44.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.746 "is_configured": true, 00:10:44.746 "data_offset": 2048, 00:10:44.746 "data_size": 63488 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "name": "pt3", 00:10:44.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.746 "is_configured": true, 00:10:44.746 "data_offset": 2048, 00:10:44.746 "data_size": 63488 00:10:44.746 }, 00:10:44.746 { 00:10:44.746 "name": "pt4", 00:10:44.746 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.746 "is_configured": true, 00:10:44.746 "data_offset": 2048, 00:10:44.746 "data_size": 63488 00:10:44.746 } 00:10:44.746 ] 00:10:44.746 } 00:10:44.746 } 00:10:44.746 }' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:44.746 pt2 00:10:44.746 pt3 00:10:44.746 pt4' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.746 02:25:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.746 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
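The `verify_raid_bdev_properties` loop traced above builds `cmp_raid_bdev` from raid_bdev1 and then, for each of pt1..pt4, compares the jq projection `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`. As a rough, hand-reduced sketch of that comparison (field values copied from the `bdev_get_bdevs` output in this log; the actual test drives it through `rpc_cmd` and jq, not Python, and only `block_size` is set here since the md/dif fields are null for these bdevs):

```python
# Sketch of the field comparison the shell loop above performs with jq.
# The dicts are reduced by hand from the bdev_get_bdevs output in this
# log: only the four compared fields are kept.
raid_bdev = {"block_size": 512, "md_size": None,
             "md_interleave": None, "dif_type": None}
base_bdevs = {name: dict(raid_bdev) for name in ("pt1", "pt2", "pt3", "pt4")}

def fields_key(bdev):
    # Mirrors: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # jq's join(" ") renders null as an empty string, which is why the
    # log captures cmp_raid_bdev/cmp_base_bdev as '512   ' (trailing spaces).
    return " ".join("" if bdev[f] is None else str(bdev[f])
                    for f in ("block_size", "md_size", "md_interleave", "dif_type"))

cmp_raid_bdev = fields_key(raid_bdev)
for name, bdev in base_bdevs.items():
    # Equivalent of bdev_raid.sh@193: [[ $cmp_base_bdev == $cmp_raid_bdev ]]
    assert fields_key(bdev) == cmp_raid_bdev, f"{name} differs from raid_bdev1"
print(repr(cmp_raid_bdev))  # '512   '
```

The empty trailing fields are expected for these passthru-on-malloc bdevs: with no metadata or DIF configured, only the 512-byte block size participates in the match.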
00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:45.006 [2024-10-13 02:25:03.488132] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9e6a25dd-5fc5-4a21-8afe-9e30fefc9019 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9e6a25dd-5fc5-4a21-8afe-9e30fefc9019 ']' 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.006 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.007 [2024-10-13 02:25:03.535730] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.007 [2024-10-13 02:25:03.535826] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.007 [2024-10-13 02:25:03.535983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.007 [2024-10-13 02:25:03.536118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.007 [2024-10-13 02:25:03.536185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.007 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.267 [2024-10-13 02:25:03.707454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:45.267 [2024-10-13 02:25:03.709792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:45.267 [2024-10-13 02:25:03.709914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:45.267 [2024-10-13 02:25:03.709986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:45.267 [2024-10-13 02:25:03.710126] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:45.267 [2024-10-13 02:25:03.710217] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:45.267 [2024-10-13 02:25:03.710290] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:45.267 [2024-10-13 02:25:03.710333] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:45.267 [2024-10-13 02:25:03.710349] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.267 [2024-10-13 02:25:03.710360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:10:45.267 request: 00:10:45.267 { 00:10:45.267 "name": "raid_bdev1", 00:10:45.267 "raid_level": "raid0", 00:10:45.267 "base_bdevs": [ 00:10:45.267 "malloc1", 00:10:45.267 "malloc2", 00:10:45.267 "malloc3", 00:10:45.267 "malloc4" 00:10:45.267 ], 00:10:45.267 "strip_size_kb": 64, 00:10:45.267 "superblock": false, 00:10:45.267 "method": "bdev_raid_create", 00:10:45.267 "req_id": 1 00:10:45.267 } 00:10:45.267 Got JSON-RPC error response 00:10:45.267 response: 00:10:45.267 { 00:10:45.267 "code": -17, 00:10:45.267 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:45.267 } 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:45.267 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.268 [2024-10-13 02:25:03.775288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:45.268 [2024-10-13 02:25:03.775428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.268 [2024-10-13 02:25:03.775479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:45.268 [2024-10-13 02:25:03.775538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.268 [2024-10-13 02:25:03.778244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.268 [2024-10-13 02:25:03.778318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:45.268 [2024-10-13 02:25:03.778458] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:45.268 [2024-10-13 02:25:03.778523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:45.268 pt1 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.268 "name": "raid_bdev1", 00:10:45.268 "uuid": "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019", 00:10:45.268 "strip_size_kb": 64, 00:10:45.268 "state": "configuring", 00:10:45.268 "raid_level": "raid0", 00:10:45.268 "superblock": true, 00:10:45.268 "num_base_bdevs": 4, 00:10:45.268 "num_base_bdevs_discovered": 1, 00:10:45.268 "num_base_bdevs_operational": 4, 00:10:45.268 "base_bdevs_list": [ 00:10:45.268 { 00:10:45.268 "name": "pt1", 00:10:45.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.268 "is_configured": true, 00:10:45.268 "data_offset": 2048, 00:10:45.268 "data_size": 63488 00:10:45.268 }, 00:10:45.268 { 00:10:45.268 "name": null, 00:10:45.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.268 "is_configured": false, 00:10:45.268 "data_offset": 2048, 00:10:45.268 "data_size": 63488 00:10:45.268 }, 00:10:45.268 { 00:10:45.268 "name": null, 00:10:45.268 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:45.268 "is_configured": false, 00:10:45.268 "data_offset": 2048, 00:10:45.268 "data_size": 63488 00:10:45.268 }, 00:10:45.268 { 00:10:45.268 "name": null, 00:10:45.268 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.268 "is_configured": false, 00:10:45.268 "data_offset": 2048, 00:10:45.268 "data_size": 63488 00:10:45.268 } 00:10:45.268 ] 00:10:45.268 }' 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.268 02:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.837 [2024-10-13 02:25:04.242522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.837 [2024-10-13 02:25:04.242663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.837 [2024-10-13 02:25:04.242720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:45.837 [2024-10-13 02:25:04.242732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.837 [2024-10-13 02:25:04.243333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.837 [2024-10-13 02:25:04.243358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.837 [2024-10-13 02:25:04.243463] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:45.837 [2024-10-13 02:25:04.243503] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.837 pt2 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.837 [2024-10-13 02:25:04.254528] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.837 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.838 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.838 02:25:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.838 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.838 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.838 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.838 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.838 "name": "raid_bdev1", 00:10:45.838 "uuid": "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019", 00:10:45.838 "strip_size_kb": 64, 00:10:45.838 "state": "configuring", 00:10:45.838 "raid_level": "raid0", 00:10:45.838 "superblock": true, 00:10:45.838 "num_base_bdevs": 4, 00:10:45.838 "num_base_bdevs_discovered": 1, 00:10:45.838 "num_base_bdevs_operational": 4, 00:10:45.838 "base_bdevs_list": [ 00:10:45.838 { 00:10:45.838 "name": "pt1", 00:10:45.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.838 "is_configured": true, 00:10:45.838 "data_offset": 2048, 00:10:45.838 "data_size": 63488 00:10:45.838 }, 00:10:45.838 { 00:10:45.838 "name": null, 00:10:45.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.838 "is_configured": false, 00:10:45.838 "data_offset": 0, 00:10:45.838 "data_size": 63488 00:10:45.838 }, 00:10:45.838 { 00:10:45.838 "name": null, 00:10:45.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.838 "is_configured": false, 00:10:45.838 "data_offset": 2048, 00:10:45.838 "data_size": 63488 00:10:45.838 }, 00:10:45.838 { 00:10:45.838 "name": null, 00:10:45.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.838 "is_configured": false, 00:10:45.838 "data_offset": 2048, 00:10:45.838 "data_size": 63488 00:10:45.838 } 00:10:45.838 ] 00:10:45.838 }' 00:10:45.838 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.838 02:25:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.098 [2024-10-13 02:25:04.669851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.098 [2024-10-13 02:25:04.670008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.098 [2024-10-13 02:25:04.670062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:46.098 [2024-10-13 02:25:04.670100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.098 [2024-10-13 02:25:04.670696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.098 [2024-10-13 02:25:04.670775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.098 [2024-10-13 02:25:04.670941] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.098 [2024-10-13 02:25:04.671023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.098 pt2 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.098 [2024-10-13 02:25:04.681737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:46.098 [2024-10-13 02:25:04.681839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.098 [2024-10-13 02:25:04.681864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:46.098 [2024-10-13 02:25:04.681919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.098 [2024-10-13 02:25:04.682363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.098 [2024-10-13 02:25:04.682391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:46.098 [2024-10-13 02:25:04.682467] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:46.098 [2024-10-13 02:25:04.682492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:46.098 pt3 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.098 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.099 [2024-10-13 02:25:04.693738] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:46.099 [2024-10-13 02:25:04.693793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.099 [2024-10-13 02:25:04.693810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:46.099 [2024-10-13 02:25:04.693822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.099 [2024-10-13 02:25:04.694241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.099 [2024-10-13 02:25:04.694264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:46.099 [2024-10-13 02:25:04.694325] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:46.099 [2024-10-13 02:25:04.694352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:46.099 [2024-10-13 02:25:04.694464] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:46.099 [2024-10-13 02:25:04.694476] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:46.099 [2024-10-13 02:25:04.694743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:46.099 [2024-10-13 02:25:04.694878] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:46.099 [2024-10-13 02:25:04.695007] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:46.099 [2024-10-13 02:25:04.695221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.099 pt4 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.099 "name": "raid_bdev1", 00:10:46.099 "uuid": "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019", 00:10:46.099 "strip_size_kb": 64, 00:10:46.099 "state": "online", 00:10:46.099 "raid_level": "raid0", 00:10:46.099 
"superblock": true, 00:10:46.099 "num_base_bdevs": 4, 00:10:46.099 "num_base_bdevs_discovered": 4, 00:10:46.099 "num_base_bdevs_operational": 4, 00:10:46.099 "base_bdevs_list": [ 00:10:46.099 { 00:10:46.099 "name": "pt1", 00:10:46.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.099 "is_configured": true, 00:10:46.099 "data_offset": 2048, 00:10:46.099 "data_size": 63488 00:10:46.099 }, 00:10:46.099 { 00:10:46.099 "name": "pt2", 00:10:46.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.099 "is_configured": true, 00:10:46.099 "data_offset": 2048, 00:10:46.099 "data_size": 63488 00:10:46.099 }, 00:10:46.099 { 00:10:46.099 "name": "pt3", 00:10:46.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.099 "is_configured": true, 00:10:46.099 "data_offset": 2048, 00:10:46.099 "data_size": 63488 00:10:46.099 }, 00:10:46.099 { 00:10:46.099 "name": "pt4", 00:10:46.099 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.099 "is_configured": true, 00:10:46.099 "data_offset": 2048, 00:10:46.099 "data_size": 63488 00:10:46.099 } 00:10:46.099 ] 00:10:46.099 }' 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.099 02:25:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.669 02:25:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.669 [2024-10-13 02:25:05.157384] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.669 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.669 "name": "raid_bdev1", 00:10:46.669 "aliases": [ 00:10:46.669 "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019" 00:10:46.669 ], 00:10:46.669 "product_name": "Raid Volume", 00:10:46.669 "block_size": 512, 00:10:46.669 "num_blocks": 253952, 00:10:46.669 "uuid": "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019", 00:10:46.669 "assigned_rate_limits": { 00:10:46.669 "rw_ios_per_sec": 0, 00:10:46.669 "rw_mbytes_per_sec": 0, 00:10:46.669 "r_mbytes_per_sec": 0, 00:10:46.669 "w_mbytes_per_sec": 0 00:10:46.669 }, 00:10:46.669 "claimed": false, 00:10:46.669 "zoned": false, 00:10:46.669 "supported_io_types": { 00:10:46.669 "read": true, 00:10:46.669 "write": true, 00:10:46.669 "unmap": true, 00:10:46.669 "flush": true, 00:10:46.669 "reset": true, 00:10:46.669 "nvme_admin": false, 00:10:46.669 "nvme_io": false, 00:10:46.669 "nvme_io_md": false, 00:10:46.669 "write_zeroes": true, 00:10:46.669 "zcopy": false, 00:10:46.669 "get_zone_info": false, 00:10:46.669 "zone_management": false, 00:10:46.669 "zone_append": false, 00:10:46.669 "compare": false, 00:10:46.669 "compare_and_write": false, 00:10:46.669 "abort": false, 00:10:46.669 "seek_hole": false, 00:10:46.669 "seek_data": false, 00:10:46.669 "copy": false, 00:10:46.669 "nvme_iov_md": false 00:10:46.669 }, 00:10:46.669 
"memory_domains": [ 00:10:46.669 { 00:10:46.669 "dma_device_id": "system", 00:10:46.669 "dma_device_type": 1 00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.669 "dma_device_type": 2 00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "dma_device_id": "system", 00:10:46.669 "dma_device_type": 1 00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.669 "dma_device_type": 2 00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "dma_device_id": "system", 00:10:46.669 "dma_device_type": 1 00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.669 "dma_device_type": 2 00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "dma_device_id": "system", 00:10:46.669 "dma_device_type": 1 00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.669 "dma_device_type": 2 00:10:46.669 } 00:10:46.669 ], 00:10:46.669 "driver_specific": { 00:10:46.669 "raid": { 00:10:46.669 "uuid": "9e6a25dd-5fc5-4a21-8afe-9e30fefc9019", 00:10:46.669 "strip_size_kb": 64, 00:10:46.669 "state": "online", 00:10:46.669 "raid_level": "raid0", 00:10:46.669 "superblock": true, 00:10:46.669 "num_base_bdevs": 4, 00:10:46.669 "num_base_bdevs_discovered": 4, 00:10:46.669 "num_base_bdevs_operational": 4, 00:10:46.669 "base_bdevs_list": [ 00:10:46.669 { 00:10:46.669 "name": "pt1", 00:10:46.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.669 "is_configured": true, 00:10:46.669 "data_offset": 2048, 00:10:46.669 "data_size": 63488 00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "name": "pt2", 00:10:46.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.669 "is_configured": true, 00:10:46.669 "data_offset": 2048, 00:10:46.669 "data_size": 63488 00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "name": "pt3", 00:10:46.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.669 "is_configured": true, 00:10:46.669 "data_offset": 2048, 00:10:46.669 "data_size": 63488 
00:10:46.669 }, 00:10:46.669 { 00:10:46.669 "name": "pt4", 00:10:46.669 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.669 "is_configured": true, 00:10:46.669 "data_offset": 2048, 00:10:46.669 "data_size": 63488 00:10:46.669 } 00:10:46.669 ] 00:10:46.669 } 00:10:46.669 } 00:10:46.669 }' 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:46.670 pt2 00:10:46.670 pt3 00:10:46.670 pt4' 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.670 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:46.930 [2024-10-13 02:25:05.484813] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9e6a25dd-5fc5-4a21-8afe-9e30fefc9019 '!=' 9e6a25dd-5fc5-4a21-8afe-9e30fefc9019 ']' 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81552 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81552 ']' 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81552 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81552 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81552' 00:10:46.930 killing process with pid 81552 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81552 00:10:46.930 [2024-10-13 02:25:05.567370] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.930 02:25:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81552 00:10:46.930 [2024-10-13 02:25:05.567575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.930 [2024-10-13 02:25:05.567661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.930 [2024-10-13 02:25:05.567680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:47.190 [2024-10-13 02:25:05.655200] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.461 02:25:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:47.461 00:10:47.461 real 0m4.481s 00:10:47.461 user 0m6.802s 00:10:47.461 sys 0m1.075s 00:10:47.461 02:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.461 02:25:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.461 ************************************ 00:10:47.461 END TEST raid_superblock_test 
00:10:47.461 ************************************ 00:10:47.462 02:25:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:47.462 02:25:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:47.462 02:25:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.462 02:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.462 ************************************ 00:10:47.462 START TEST raid_read_error_test 00:10:47.462 ************************************ 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:47.462 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cNyO5qYGj8 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81806 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81806 00:10:47.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81806 ']' 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.740 02:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.740 [2024-10-13 02:25:06.234115] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:47.740 [2024-10-13 02:25:06.234267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81806 ] 00:10:47.740 [2024-10-13 02:25:06.382641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.000 [2024-10-13 02:25:06.461364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.000 [2024-10-13 02:25:06.543669] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.000 [2024-10-13 02:25:06.543829] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 BaseBdev1_malloc 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 true 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 [2024-10-13 02:25:07.128701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:48.570 [2024-10-13 02:25:07.128828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.570 [2024-10-13 02:25:07.128885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:48.570 [2024-10-13 02:25:07.128919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.570 [2024-10-13 02:25:07.131562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.570 [2024-10-13 02:25:07.131654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:48.570 BaseBdev1 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 BaseBdev2_malloc 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 true 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 [2024-10-13 02:25:07.187066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:48.570 [2024-10-13 02:25:07.187132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.570 [2024-10-13 02:25:07.187156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:48.570 [2024-10-13 02:25:07.187165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.570 [2024-10-13 02:25:07.189663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.570 [2024-10-13 02:25:07.189701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:48.570 BaseBdev2 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 BaseBdev3_malloc 00:10:48.570 02:25:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 true 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 [2024-10-13 02:25:07.234815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:48.570 [2024-10-13 02:25:07.234951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.570 [2024-10-13 02:25:07.234982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:48.570 [2024-10-13 02:25:07.234993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.570 [2024-10-13 02:25:07.237452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.570 [2024-10-13 02:25:07.237491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:48.570 BaseBdev3 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.570 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.830 BaseBdev4_malloc 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.830 true 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.830 [2024-10-13 02:25:07.283072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:48.830 [2024-10-13 02:25:07.283130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.830 [2024-10-13 02:25:07.283174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:48.830 [2024-10-13 02:25:07.283185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.830 [2024-10-13 02:25:07.285783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.830 [2024-10-13 02:25:07.285823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:48.830 BaseBdev4 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.830 [2024-10-13 02:25:07.295147] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.830 [2024-10-13 02:25:07.297511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.830 [2024-10-13 02:25:07.297666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.830 [2024-10-13 02:25:07.297748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:48.830 [2024-10-13 02:25:07.298003] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:48.830 [2024-10-13 02:25:07.298018] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.830 [2024-10-13 02:25:07.298347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:48.830 [2024-10-13 02:25:07.298500] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:48.830 [2024-10-13 02:25:07.298514] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:48.830 [2024-10-13 02:25:07.298670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:48.830 02:25:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.830 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.830 "name": "raid_bdev1", 00:10:48.830 "uuid": "d7feb0f8-49fa-4a59-b63b-a2356dc58128", 00:10:48.830 "strip_size_kb": 64, 00:10:48.830 "state": "online", 00:10:48.831 "raid_level": "raid0", 00:10:48.831 "superblock": true, 00:10:48.831 "num_base_bdevs": 4, 00:10:48.831 "num_base_bdevs_discovered": 4, 00:10:48.831 "num_base_bdevs_operational": 4, 00:10:48.831 "base_bdevs_list": [ 00:10:48.831 
{ 00:10:48.831 "name": "BaseBdev1", 00:10:48.831 "uuid": "d9e1d8ad-6ea9-5a0f-9d8c-10b09a51d2be", 00:10:48.831 "is_configured": true, 00:10:48.831 "data_offset": 2048, 00:10:48.831 "data_size": 63488 00:10:48.831 }, 00:10:48.831 { 00:10:48.831 "name": "BaseBdev2", 00:10:48.831 "uuid": "e11809e7-1dae-5626-bd7b-81be91860dff", 00:10:48.831 "is_configured": true, 00:10:48.831 "data_offset": 2048, 00:10:48.831 "data_size": 63488 00:10:48.831 }, 00:10:48.831 { 00:10:48.831 "name": "BaseBdev3", 00:10:48.831 "uuid": "acbd4012-c991-54b9-96fb-9c75545e15df", 00:10:48.831 "is_configured": true, 00:10:48.831 "data_offset": 2048, 00:10:48.831 "data_size": 63488 00:10:48.831 }, 00:10:48.831 { 00:10:48.831 "name": "BaseBdev4", 00:10:48.831 "uuid": "54c97187-9c4b-5550-9b12-94b57f58b0b0", 00:10:48.831 "is_configured": true, 00:10:48.831 "data_offset": 2048, 00:10:48.831 "data_size": 63488 00:10:48.831 } 00:10:48.831 ] 00:10:48.831 }' 00:10:48.831 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.831 02:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.090 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:49.090 02:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:49.355 [2024-10-13 02:25:07.871031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:50.299 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:50.299 02:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.299 02:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.299 02:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.299 02:25:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:50.299 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:50.299 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:50.299 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:50.299 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.300 02:25:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.300 "name": "raid_bdev1", 00:10:50.300 "uuid": "d7feb0f8-49fa-4a59-b63b-a2356dc58128", 00:10:50.300 "strip_size_kb": 64, 00:10:50.300 "state": "online", 00:10:50.300 "raid_level": "raid0", 00:10:50.300 "superblock": true, 00:10:50.300 "num_base_bdevs": 4, 00:10:50.300 "num_base_bdevs_discovered": 4, 00:10:50.300 "num_base_bdevs_operational": 4, 00:10:50.300 "base_bdevs_list": [ 00:10:50.300 { 00:10:50.300 "name": "BaseBdev1", 00:10:50.300 "uuid": "d9e1d8ad-6ea9-5a0f-9d8c-10b09a51d2be", 00:10:50.300 "is_configured": true, 00:10:50.300 "data_offset": 2048, 00:10:50.300 "data_size": 63488 00:10:50.300 }, 00:10:50.300 { 00:10:50.300 "name": "BaseBdev2", 00:10:50.300 "uuid": "e11809e7-1dae-5626-bd7b-81be91860dff", 00:10:50.300 "is_configured": true, 00:10:50.300 "data_offset": 2048, 00:10:50.300 "data_size": 63488 00:10:50.300 }, 00:10:50.300 { 00:10:50.300 "name": "BaseBdev3", 00:10:50.300 "uuid": "acbd4012-c991-54b9-96fb-9c75545e15df", 00:10:50.300 "is_configured": true, 00:10:50.300 "data_offset": 2048, 00:10:50.300 "data_size": 63488 00:10:50.300 }, 00:10:50.300 { 00:10:50.300 "name": "BaseBdev4", 00:10:50.300 "uuid": "54c97187-9c4b-5550-9b12-94b57f58b0b0", 00:10:50.300 "is_configured": true, 00:10:50.300 "data_offset": 2048, 00:10:50.300 "data_size": 63488 00:10:50.300 } 00:10:50.300 ] 00:10:50.300 }' 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.300 02:25:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.872 [2024-10-13 02:25:09.260706] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.872 [2024-10-13 02:25:09.260745] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.872 [2024-10-13 02:25:09.263567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.872 [2024-10-13 02:25:09.263679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.872 [2024-10-13 02:25:09.263760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.872 [2024-10-13 02:25:09.263808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.872 { 00:10:50.872 "results": [ 00:10:50.872 { 00:10:50.872 "job": "raid_bdev1", 00:10:50.872 "core_mask": "0x1", 00:10:50.872 "workload": "randrw", 00:10:50.872 "percentage": 50, 00:10:50.872 "status": "finished", 00:10:50.872 "queue_depth": 1, 00:10:50.872 "io_size": 131072, 00:10:50.872 "runtime": 1.389925, 00:10:50.872 "iops": 13533.104304189075, 00:10:50.872 "mibps": 1691.6380380236344, 00:10:50.872 "io_failed": 1, 00:10:50.872 "io_timeout": 0, 00:10:50.872 "avg_latency_us": 103.90890585017267, 00:10:50.872 "min_latency_us": 27.165065502183406, 00:10:50.872 "max_latency_us": 1488.1537117903931 00:10:50.872 } 00:10:50.872 ], 00:10:50.872 "core_count": 1 00:10:50.872 } 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81806 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81806 ']' 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81806 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:50.872 02:25:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81806 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.872 killing process with pid 81806 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81806' 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81806 00:10:50.872 [2024-10-13 02:25:09.302459] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.872 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81806 00:10:50.872 [2024-10-13 02:25:09.374464] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cNyO5qYGj8 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:51.132 00:10:51.132 real 0m3.651s 00:10:51.132 user 0m4.435s 00:10:51.132 sys 0m0.718s 00:10:51.132 ************************************ 00:10:51.132 
END TEST raid_read_error_test 00:10:51.132 ************************************ 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.132 02:25:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.391 02:25:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:51.391 02:25:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:51.391 02:25:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.392 02:25:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.392 ************************************ 00:10:51.392 START TEST raid_write_error_test 00:10:51.392 ************************************ 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.392 02:25:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wE2hTkxQf7 
00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81936 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81936 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 81936 ']' 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.392 02:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.392 [2024-10-13 02:25:09.955358] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:51.392 [2024-10-13 02:25:09.955626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81936 ] 00:10:51.651 [2024-10-13 02:25:10.104462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.651 [2024-10-13 02:25:10.186159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.651 [2024-10-13 02:25:10.270141] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.651 [2024-10-13 02:25:10.270292] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.220 BaseBdev1_malloc 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.220 true 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.220 [2024-10-13 02:25:10.858423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:52.220 [2024-10-13 02:25:10.858489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.220 [2024-10-13 02:25:10.858532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:52.220 [2024-10-13 02:25:10.858541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.220 [2024-10-13 02:25:10.861224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.220 [2024-10-13 02:25:10.861261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:52.220 BaseBdev1 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.220 BaseBdev2_malloc 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:52.220 02:25:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.220 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.481 true 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.481 [2024-10-13 02:25:10.917388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:52.481 [2024-10-13 02:25:10.917472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.481 [2024-10-13 02:25:10.917495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:52.481 [2024-10-13 02:25:10.917505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.481 [2024-10-13 02:25:10.920045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.481 [2024-10-13 02:25:10.920081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:52.481 BaseBdev2 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:52.481 BaseBdev3_malloc 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.481 true 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.481 [2024-10-13 02:25:10.965520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:52.481 [2024-10-13 02:25:10.965575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.481 [2024-10-13 02:25:10.965598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:52.481 [2024-10-13 02:25:10.965608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.481 [2024-10-13 02:25:10.968174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.481 [2024-10-13 02:25:10.968214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:52.481 BaseBdev3 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.481 BaseBdev4_malloc 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.481 02:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.481 true 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.481 [2024-10-13 02:25:11.013522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:52.481 [2024-10-13 02:25:11.013647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.481 [2024-10-13 02:25:11.013691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:52.481 [2024-10-13 02:25:11.013720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.481 [2024-10-13 02:25:11.016251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.481 [2024-10-13 02:25:11.016323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:52.481 BaseBdev4 
00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.481 [2024-10-13 02:25:11.025565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.481 [2024-10-13 02:25:11.027798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.481 [2024-10-13 02:25:11.027903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.481 [2024-10-13 02:25:11.027974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:52.481 [2024-10-13 02:25:11.028184] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:52.481 [2024-10-13 02:25:11.028204] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.481 [2024-10-13 02:25:11.028482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:52.481 [2024-10-13 02:25:11.028631] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:52.481 [2024-10-13 02:25:11.028645] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:52.481 [2024-10-13 02:25:11.028792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.481 "name": "raid_bdev1", 00:10:52.481 "uuid": "8dfd9734-527b-4259-91a2-e2a4d61b74c9", 00:10:52.481 "strip_size_kb": 64, 00:10:52.481 "state": "online", 00:10:52.481 "raid_level": "raid0", 00:10:52.481 "superblock": true, 00:10:52.481 "num_base_bdevs": 4, 00:10:52.481 "num_base_bdevs_discovered": 4, 00:10:52.481 
"num_base_bdevs_operational": 4, 00:10:52.481 "base_bdevs_list": [ 00:10:52.481 { 00:10:52.481 "name": "BaseBdev1", 00:10:52.481 "uuid": "29c944a9-de0b-5157-b147-7c7f83506097", 00:10:52.481 "is_configured": true, 00:10:52.481 "data_offset": 2048, 00:10:52.481 "data_size": 63488 00:10:52.481 }, 00:10:52.481 { 00:10:52.481 "name": "BaseBdev2", 00:10:52.481 "uuid": "e6af39d3-01ce-59fb-88fa-a01c343cfd4a", 00:10:52.481 "is_configured": true, 00:10:52.481 "data_offset": 2048, 00:10:52.481 "data_size": 63488 00:10:52.481 }, 00:10:52.481 { 00:10:52.481 "name": "BaseBdev3", 00:10:52.481 "uuid": "56f2f8d7-aedd-5b3f-aabe-a05902b8f3d8", 00:10:52.481 "is_configured": true, 00:10:52.481 "data_offset": 2048, 00:10:52.481 "data_size": 63488 00:10:52.481 }, 00:10:52.481 { 00:10:52.481 "name": "BaseBdev4", 00:10:52.481 "uuid": "3a199d7a-5bf4-5b98-8079-ae5bf7bf8f7a", 00:10:52.481 "is_configured": true, 00:10:52.481 "data_offset": 2048, 00:10:52.481 "data_size": 63488 00:10:52.481 } 00:10:52.481 ] 00:10:52.481 }' 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.481 02:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.051 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:53.051 02:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:53.051 [2024-10-13 02:25:11.577196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.990 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.990 "name": "raid_bdev1", 00:10:53.990 "uuid": "8dfd9734-527b-4259-91a2-e2a4d61b74c9", 00:10:53.990 "strip_size_kb": 64, 00:10:53.990 "state": "online", 00:10:53.990 "raid_level": "raid0", 00:10:53.990 "superblock": true, 00:10:53.990 "num_base_bdevs": 4, 00:10:53.990 "num_base_bdevs_discovered": 4, 00:10:53.990 "num_base_bdevs_operational": 4, 00:10:53.990 "base_bdevs_list": [ 00:10:53.990 { 00:10:53.990 "name": "BaseBdev1", 00:10:53.990 "uuid": "29c944a9-de0b-5157-b147-7c7f83506097", 00:10:53.990 "is_configured": true, 00:10:53.990 "data_offset": 2048, 00:10:53.990 "data_size": 63488 00:10:53.990 }, 00:10:53.990 { 00:10:53.990 "name": "BaseBdev2", 00:10:53.990 "uuid": "e6af39d3-01ce-59fb-88fa-a01c343cfd4a", 00:10:53.990 "is_configured": true, 00:10:53.990 "data_offset": 2048, 00:10:53.990 "data_size": 63488 00:10:53.990 }, 00:10:53.990 { 00:10:53.990 "name": "BaseBdev3", 00:10:53.990 "uuid": "56f2f8d7-aedd-5b3f-aabe-a05902b8f3d8", 00:10:53.990 "is_configured": true, 00:10:53.990 "data_offset": 2048, 00:10:53.990 "data_size": 63488 00:10:53.990 }, 00:10:53.990 { 00:10:53.990 "name": "BaseBdev4", 00:10:53.990 "uuid": "3a199d7a-5bf4-5b98-8079-ae5bf7bf8f7a", 00:10:53.990 "is_configured": true, 00:10:53.990 "data_offset": 2048, 00:10:53.990 "data_size": 63488 00:10:53.990 } 00:10:53.990 ] 00:10:53.990 }' 00:10:53.991 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.991 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:54.560 [2024-10-13 02:25:12.959089] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.560 [2024-10-13 02:25:12.959205] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.560 [2024-10-13 02:25:12.962177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.560 [2024-10-13 02:25:12.962255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.560 [2024-10-13 02:25:12.962317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.560 [2024-10-13 02:25:12.962328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:54.560 { 00:10:54.560 "results": [ 00:10:54.560 { 00:10:54.560 "job": "raid_bdev1", 00:10:54.560 "core_mask": "0x1", 00:10:54.560 "workload": "randrw", 00:10:54.560 "percentage": 50, 00:10:54.560 "status": "finished", 00:10:54.560 "queue_depth": 1, 00:10:54.560 "io_size": 131072, 00:10:54.560 "runtime": 1.382104, 00:10:54.560 "iops": 13307.247500911653, 00:10:54.560 "mibps": 1663.4059376139567, 00:10:54.560 "io_failed": 1, 00:10:54.560 "io_timeout": 0, 00:10:54.560 "avg_latency_us": 105.65168246795997, 00:10:54.560 "min_latency_us": 26.717903930131005, 00:10:54.560 "max_latency_us": 1531.0812227074236 00:10:54.560 } 00:10:54.560 ], 00:10:54.560 "core_count": 1 00:10:54.560 } 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81936 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 81936 ']' 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 81936 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # 
uname 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.560 02:25:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81936 00:10:54.560 02:25:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.560 02:25:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.560 02:25:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81936' 00:10:54.560 killing process with pid 81936 00:10:54.560 02:25:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 81936 00:10:54.560 [2024-10-13 02:25:13.010390] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.560 02:25:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 81936 00:10:54.560 [2024-10-13 02:25:13.081854] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wE2hTkxQf7 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:54.820 ************************************ 00:10:54.820 END TEST raid_write_error_test 00:10:54.820 ************************************ 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:54.820 00:10:54.820 real 0m3.639s 00:10:54.820 user 0m4.396s 00:10:54.820 sys 0m0.725s 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.820 02:25:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.080 02:25:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:55.080 02:25:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:55.080 02:25:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:55.080 02:25:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.080 02:25:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.080 ************************************ 00:10:55.080 START TEST raid_state_function_test 00:10:55.080 ************************************ 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.080 02:25:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:55.080 02:25:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82074 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82074' 00:10:55.080 Process raid pid: 82074 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82074 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82074 ']' 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.080 02:25:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.080 [2024-10-13 02:25:13.660402] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:55.080 [2024-10-13 02:25:13.660670] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.341 [2024-10-13 02:25:13.792334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.341 [2024-10-13 02:25:13.873365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.341 [2024-10-13 02:25:13.956507] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.341 [2024-10-13 02:25:13.956555] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.910 [2024-10-13 02:25:14.559604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.910 [2024-10-13 02:25:14.559669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.910 [2024-10-13 02:25:14.559693] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.910 [2024-10-13 02:25:14.559705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.910 [2024-10-13 02:25:14.559711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:55.910 [2024-10-13 02:25:14.559727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.910 [2024-10-13 02:25:14.559733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.910 [2024-10-13 02:25:14.559742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.910 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.170 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.170 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.170 "name": "Existed_Raid", 00:10:56.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.170 "strip_size_kb": 64, 00:10:56.170 "state": "configuring", 00:10:56.170 "raid_level": "concat", 00:10:56.170 "superblock": false, 00:10:56.170 "num_base_bdevs": 4, 00:10:56.170 "num_base_bdevs_discovered": 0, 00:10:56.170 "num_base_bdevs_operational": 4, 00:10:56.170 "base_bdevs_list": [ 00:10:56.170 { 00:10:56.170 "name": "BaseBdev1", 00:10:56.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.170 "is_configured": false, 00:10:56.170 "data_offset": 0, 00:10:56.170 "data_size": 0 00:10:56.170 }, 00:10:56.170 { 00:10:56.170 "name": "BaseBdev2", 00:10:56.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.170 "is_configured": false, 00:10:56.170 "data_offset": 0, 00:10:56.170 "data_size": 0 00:10:56.170 }, 00:10:56.170 { 00:10:56.170 "name": "BaseBdev3", 00:10:56.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.170 "is_configured": false, 00:10:56.170 "data_offset": 0, 00:10:56.170 "data_size": 0 00:10:56.170 }, 00:10:56.170 { 00:10:56.170 "name": "BaseBdev4", 00:10:56.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.170 "is_configured": false, 00:10:56.170 "data_offset": 0, 00:10:56.170 "data_size": 0 00:10:56.170 } 00:10:56.170 ] 00:10:56.170 }' 00:10:56.170 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.171 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.430 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:56.430 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.431 [2024-10-13 02:25:14.971145] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.431 [2024-10-13 02:25:14.971350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:56.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.431 [2024-10-13 02:25:14.983114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.431 [2024-10-13 02:25:14.983282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.431 [2024-10-13 02:25:14.983340] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.431 [2024-10-13 02:25:14.983388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.431 [2024-10-13 02:25:14.983458] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:56.431 [2024-10-13 02:25:14.983514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.431 [2024-10-13 02:25:14.983567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:56.431 [2024-10-13 02:25:14.983631] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:56.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.431 02:25:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:56.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.431 02:25:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.431 [2024-10-13 02:25:15.006846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.431 BaseBdev1 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.431 [ 00:10:56.431 { 00:10:56.431 "name": "BaseBdev1", 00:10:56.431 "aliases": [ 00:10:56.431 "7ab6b733-f88e-46be-8e16-dd2b29ef792e" 00:10:56.431 ], 00:10:56.431 "product_name": "Malloc disk", 00:10:56.431 "block_size": 512, 00:10:56.431 "num_blocks": 65536, 00:10:56.431 "uuid": "7ab6b733-f88e-46be-8e16-dd2b29ef792e", 00:10:56.431 "assigned_rate_limits": { 00:10:56.431 "rw_ios_per_sec": 0, 00:10:56.431 "rw_mbytes_per_sec": 0, 00:10:56.431 "r_mbytes_per_sec": 0, 00:10:56.431 "w_mbytes_per_sec": 0 00:10:56.431 }, 00:10:56.431 "claimed": true, 00:10:56.431 "claim_type": "exclusive_write", 00:10:56.431 "zoned": false, 00:10:56.431 "supported_io_types": { 00:10:56.431 "read": true, 00:10:56.431 "write": true, 00:10:56.431 "unmap": true, 00:10:56.431 "flush": true, 00:10:56.431 "reset": true, 00:10:56.431 "nvme_admin": false, 00:10:56.431 "nvme_io": false, 00:10:56.431 "nvme_io_md": false, 00:10:56.431 "write_zeroes": true, 00:10:56.431 "zcopy": true, 00:10:56.431 "get_zone_info": false, 00:10:56.431 "zone_management": false, 00:10:56.431 "zone_append": false, 00:10:56.431 "compare": false, 00:10:56.431 "compare_and_write": false, 00:10:56.431 "abort": true, 00:10:56.431 "seek_hole": false, 00:10:56.431 "seek_data": false, 00:10:56.431 "copy": true, 00:10:56.431 "nvme_iov_md": false 00:10:56.431 }, 00:10:56.431 "memory_domains": [ 00:10:56.431 { 00:10:56.431 "dma_device_id": "system", 00:10:56.431 "dma_device_type": 1 00:10:56.431 }, 00:10:56.431 { 00:10:56.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.431 "dma_device_type": 2 00:10:56.431 } 00:10:56.431 ], 00:10:56.431 "driver_specific": {} 00:10:56.431 } 00:10:56.431 ] 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.431 "name": "Existed_Raid", 
00:10:56.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.431 "strip_size_kb": 64, 00:10:56.431 "state": "configuring", 00:10:56.431 "raid_level": "concat", 00:10:56.431 "superblock": false, 00:10:56.431 "num_base_bdevs": 4, 00:10:56.431 "num_base_bdevs_discovered": 1, 00:10:56.431 "num_base_bdevs_operational": 4, 00:10:56.431 "base_bdevs_list": [ 00:10:56.431 { 00:10:56.431 "name": "BaseBdev1", 00:10:56.431 "uuid": "7ab6b733-f88e-46be-8e16-dd2b29ef792e", 00:10:56.431 "is_configured": true, 00:10:56.431 "data_offset": 0, 00:10:56.431 "data_size": 65536 00:10:56.431 }, 00:10:56.431 { 00:10:56.431 "name": "BaseBdev2", 00:10:56.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.431 "is_configured": false, 00:10:56.431 "data_offset": 0, 00:10:56.431 "data_size": 0 00:10:56.431 }, 00:10:56.431 { 00:10:56.431 "name": "BaseBdev3", 00:10:56.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.431 "is_configured": false, 00:10:56.431 "data_offset": 0, 00:10:56.431 "data_size": 0 00:10:56.431 }, 00:10:56.431 { 00:10:56.431 "name": "BaseBdev4", 00:10:56.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.431 "is_configured": false, 00:10:56.431 "data_offset": 0, 00:10:56.431 "data_size": 0 00:10:56.431 } 00:10:56.431 ] 00:10:56.431 }' 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.431 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.001 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.002 [2024-10-13 02:25:15.474074] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.002 [2024-10-13 02:25:15.474232] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.002 [2024-10-13 02:25:15.482117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.002 [2024-10-13 02:25:15.484091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.002 [2024-10-13 02:25:15.484183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.002 [2024-10-13 02:25:15.484219] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.002 [2024-10-13 02:25:15.484247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.002 [2024-10-13 02:25:15.484312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:57.002 [2024-10-13 02:25:15.484340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.002 "name": "Existed_Raid", 00:10:57.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.002 "strip_size_kb": 64, 00:10:57.002 "state": "configuring", 00:10:57.002 "raid_level": "concat", 00:10:57.002 "superblock": false, 00:10:57.002 "num_base_bdevs": 4, 00:10:57.002 
"num_base_bdevs_discovered": 1, 00:10:57.002 "num_base_bdevs_operational": 4, 00:10:57.002 "base_bdevs_list": [ 00:10:57.002 { 00:10:57.002 "name": "BaseBdev1", 00:10:57.002 "uuid": "7ab6b733-f88e-46be-8e16-dd2b29ef792e", 00:10:57.002 "is_configured": true, 00:10:57.002 "data_offset": 0, 00:10:57.002 "data_size": 65536 00:10:57.002 }, 00:10:57.002 { 00:10:57.002 "name": "BaseBdev2", 00:10:57.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.002 "is_configured": false, 00:10:57.002 "data_offset": 0, 00:10:57.002 "data_size": 0 00:10:57.002 }, 00:10:57.002 { 00:10:57.002 "name": "BaseBdev3", 00:10:57.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.002 "is_configured": false, 00:10:57.002 "data_offset": 0, 00:10:57.002 "data_size": 0 00:10:57.002 }, 00:10:57.002 { 00:10:57.002 "name": "BaseBdev4", 00:10:57.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.002 "is_configured": false, 00:10:57.002 "data_offset": 0, 00:10:57.002 "data_size": 0 00:10:57.002 } 00:10:57.002 ] 00:10:57.002 }' 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.002 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.262 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.262 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.262 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.524 [2024-10-13 02:25:15.959512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.524 BaseBdev2 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:57.524 02:25:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.524 [ 00:10:57.524 { 00:10:57.524 "name": "BaseBdev2", 00:10:57.524 "aliases": [ 00:10:57.524 "477255e4-b576-40c5-babc-50158994be6c" 00:10:57.524 ], 00:10:57.524 "product_name": "Malloc disk", 00:10:57.524 "block_size": 512, 00:10:57.524 "num_blocks": 65536, 00:10:57.524 "uuid": "477255e4-b576-40c5-babc-50158994be6c", 00:10:57.524 "assigned_rate_limits": { 00:10:57.524 "rw_ios_per_sec": 0, 00:10:57.524 "rw_mbytes_per_sec": 0, 00:10:57.524 "r_mbytes_per_sec": 0, 00:10:57.524 "w_mbytes_per_sec": 0 00:10:57.524 }, 00:10:57.524 "claimed": true, 00:10:57.524 "claim_type": "exclusive_write", 00:10:57.524 "zoned": false, 00:10:57.524 "supported_io_types": { 
00:10:57.524 "read": true, 00:10:57.524 "write": true, 00:10:57.524 "unmap": true, 00:10:57.524 "flush": true, 00:10:57.524 "reset": true, 00:10:57.524 "nvme_admin": false, 00:10:57.524 "nvme_io": false, 00:10:57.524 "nvme_io_md": false, 00:10:57.524 "write_zeroes": true, 00:10:57.524 "zcopy": true, 00:10:57.524 "get_zone_info": false, 00:10:57.524 "zone_management": false, 00:10:57.524 "zone_append": false, 00:10:57.524 "compare": false, 00:10:57.524 "compare_and_write": false, 00:10:57.524 "abort": true, 00:10:57.524 "seek_hole": false, 00:10:57.524 "seek_data": false, 00:10:57.524 "copy": true, 00:10:57.524 "nvme_iov_md": false 00:10:57.524 }, 00:10:57.524 "memory_domains": [ 00:10:57.524 { 00:10:57.524 "dma_device_id": "system", 00:10:57.524 "dma_device_type": 1 00:10:57.524 }, 00:10:57.524 { 00:10:57.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.524 "dma_device_type": 2 00:10:57.524 } 00:10:57.524 ], 00:10:57.524 "driver_specific": {} 00:10:57.524 } 00:10:57.524 ] 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.524 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.525 02:25:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.525 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.525 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.525 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.525 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.525 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.525 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.525 "name": "Existed_Raid", 00:10:57.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.525 "strip_size_kb": 64, 00:10:57.525 "state": "configuring", 00:10:57.525 "raid_level": "concat", 00:10:57.525 "superblock": false, 00:10:57.525 "num_base_bdevs": 4, 00:10:57.525 "num_base_bdevs_discovered": 2, 00:10:57.525 "num_base_bdevs_operational": 4, 00:10:57.525 "base_bdevs_list": [ 00:10:57.525 { 00:10:57.525 "name": "BaseBdev1", 00:10:57.525 "uuid": "7ab6b733-f88e-46be-8e16-dd2b29ef792e", 00:10:57.525 "is_configured": true, 00:10:57.525 "data_offset": 0, 00:10:57.525 "data_size": 65536 00:10:57.525 }, 00:10:57.525 { 00:10:57.525 "name": "BaseBdev2", 00:10:57.525 "uuid": "477255e4-b576-40c5-babc-50158994be6c", 00:10:57.525 
"is_configured": true, 00:10:57.525 "data_offset": 0, 00:10:57.525 "data_size": 65536 00:10:57.525 }, 00:10:57.525 { 00:10:57.525 "name": "BaseBdev3", 00:10:57.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.525 "is_configured": false, 00:10:57.525 "data_offset": 0, 00:10:57.525 "data_size": 0 00:10:57.525 }, 00:10:57.525 { 00:10:57.525 "name": "BaseBdev4", 00:10:57.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.525 "is_configured": false, 00:10:57.525 "data_offset": 0, 00:10:57.525 "data_size": 0 00:10:57.525 } 00:10:57.525 ] 00:10:57.525 }' 00:10:57.525 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.525 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.813 [2024-10-13 02:25:16.434355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.813 BaseBdev3 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.813 [ 00:10:57.813 { 00:10:57.813 "name": "BaseBdev3", 00:10:57.813 "aliases": [ 00:10:57.813 "fb52a6d3-aeed-4d62-925e-befeb953518a" 00:10:57.813 ], 00:10:57.813 "product_name": "Malloc disk", 00:10:57.813 "block_size": 512, 00:10:57.813 "num_blocks": 65536, 00:10:57.813 "uuid": "fb52a6d3-aeed-4d62-925e-befeb953518a", 00:10:57.813 "assigned_rate_limits": { 00:10:57.813 "rw_ios_per_sec": 0, 00:10:57.813 "rw_mbytes_per_sec": 0, 00:10:57.813 "r_mbytes_per_sec": 0, 00:10:57.813 "w_mbytes_per_sec": 0 00:10:57.813 }, 00:10:57.813 "claimed": true, 00:10:57.813 "claim_type": "exclusive_write", 00:10:57.813 "zoned": false, 00:10:57.813 "supported_io_types": { 00:10:57.813 "read": true, 00:10:57.813 "write": true, 00:10:57.813 "unmap": true, 00:10:57.813 "flush": true, 00:10:57.813 "reset": true, 00:10:57.813 "nvme_admin": false, 00:10:57.813 "nvme_io": false, 00:10:57.813 "nvme_io_md": false, 00:10:57.813 "write_zeroes": true, 00:10:57.813 "zcopy": true, 00:10:57.813 "get_zone_info": false, 00:10:57.813 "zone_management": false, 00:10:57.813 "zone_append": false, 00:10:57.813 "compare": false, 00:10:57.813 "compare_and_write": false, 
00:10:57.813 "abort": true, 00:10:57.813 "seek_hole": false, 00:10:57.813 "seek_data": false, 00:10:57.813 "copy": true, 00:10:57.813 "nvme_iov_md": false 00:10:57.813 }, 00:10:57.813 "memory_domains": [ 00:10:57.813 { 00:10:57.813 "dma_device_id": "system", 00:10:57.813 "dma_device_type": 1 00:10:57.813 }, 00:10:57.813 { 00:10:57.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.813 "dma_device_type": 2 00:10:57.813 } 00:10:57.813 ], 00:10:57.813 "driver_specific": {} 00:10:57.813 } 00:10:57.813 ] 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.813 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.082 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.082 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.082 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.082 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.082 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.082 "name": "Existed_Raid", 00:10:58.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.082 "strip_size_kb": 64, 00:10:58.082 "state": "configuring", 00:10:58.082 "raid_level": "concat", 00:10:58.082 "superblock": false, 00:10:58.082 "num_base_bdevs": 4, 00:10:58.082 "num_base_bdevs_discovered": 3, 00:10:58.082 "num_base_bdevs_operational": 4, 00:10:58.082 "base_bdevs_list": [ 00:10:58.082 { 00:10:58.082 "name": "BaseBdev1", 00:10:58.082 "uuid": "7ab6b733-f88e-46be-8e16-dd2b29ef792e", 00:10:58.082 "is_configured": true, 00:10:58.082 "data_offset": 0, 00:10:58.082 "data_size": 65536 00:10:58.082 }, 00:10:58.082 { 00:10:58.082 "name": "BaseBdev2", 00:10:58.082 "uuid": "477255e4-b576-40c5-babc-50158994be6c", 00:10:58.082 "is_configured": true, 00:10:58.082 "data_offset": 0, 00:10:58.082 "data_size": 65536 00:10:58.082 }, 00:10:58.082 { 00:10:58.082 "name": "BaseBdev3", 00:10:58.082 "uuid": "fb52a6d3-aeed-4d62-925e-befeb953518a", 00:10:58.082 "is_configured": true, 00:10:58.082 "data_offset": 0, 00:10:58.082 "data_size": 65536 00:10:58.082 }, 00:10:58.082 { 00:10:58.082 "name": "BaseBdev4", 00:10:58.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.082 "is_configured": false, 
00:10:58.082 "data_offset": 0, 00:10:58.082 "data_size": 0 00:10:58.082 } 00:10:58.082 ] 00:10:58.082 }' 00:10:58.082 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.082 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.343 [2024-10-13 02:25:16.921069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.343 [2024-10-13 02:25:16.921130] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:58.343 [2024-10-13 02:25:16.921140] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:58.343 [2024-10-13 02:25:16.921430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:58.343 [2024-10-13 02:25:16.921577] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:58.343 [2024-10-13 02:25:16.921601] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:58.343 [2024-10-13 02:25:16.921891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.343 BaseBdev4 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.343 [ 00:10:58.343 { 00:10:58.343 "name": "BaseBdev4", 00:10:58.343 "aliases": [ 00:10:58.343 "fdd0d81d-fffe-400a-b4d3-73ed7888a8a1" 00:10:58.343 ], 00:10:58.343 "product_name": "Malloc disk", 00:10:58.343 "block_size": 512, 00:10:58.343 "num_blocks": 65536, 00:10:58.343 "uuid": "fdd0d81d-fffe-400a-b4d3-73ed7888a8a1", 00:10:58.343 "assigned_rate_limits": { 00:10:58.343 "rw_ios_per_sec": 0, 00:10:58.343 "rw_mbytes_per_sec": 0, 00:10:58.343 "r_mbytes_per_sec": 0, 00:10:58.343 "w_mbytes_per_sec": 0 00:10:58.343 }, 00:10:58.343 "claimed": true, 00:10:58.343 "claim_type": "exclusive_write", 00:10:58.343 "zoned": false, 00:10:58.343 "supported_io_types": { 00:10:58.343 "read": true, 00:10:58.343 "write": true, 00:10:58.343 "unmap": true, 00:10:58.343 "flush": true, 00:10:58.343 "reset": true, 00:10:58.343 
"nvme_admin": false, 00:10:58.343 "nvme_io": false, 00:10:58.343 "nvme_io_md": false, 00:10:58.343 "write_zeroes": true, 00:10:58.343 "zcopy": true, 00:10:58.343 "get_zone_info": false, 00:10:58.343 "zone_management": false, 00:10:58.343 "zone_append": false, 00:10:58.343 "compare": false, 00:10:58.343 "compare_and_write": false, 00:10:58.343 "abort": true, 00:10:58.343 "seek_hole": false, 00:10:58.343 "seek_data": false, 00:10:58.343 "copy": true, 00:10:58.343 "nvme_iov_md": false 00:10:58.343 }, 00:10:58.343 "memory_domains": [ 00:10:58.343 { 00:10:58.343 "dma_device_id": "system", 00:10:58.343 "dma_device_type": 1 00:10:58.343 }, 00:10:58.343 { 00:10:58.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.343 "dma_device_type": 2 00:10:58.343 } 00:10:58.343 ], 00:10:58.343 "driver_specific": {} 00:10:58.343 } 00:10:58.343 ] 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.343 
02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.343 02:25:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.343 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.343 "name": "Existed_Raid", 00:10:58.343 "uuid": "b8530b5b-6762-4c4f-95c7-6bd40c1480b5", 00:10:58.343 "strip_size_kb": 64, 00:10:58.343 "state": "online", 00:10:58.343 "raid_level": "concat", 00:10:58.343 "superblock": false, 00:10:58.343 "num_base_bdevs": 4, 00:10:58.343 "num_base_bdevs_discovered": 4, 00:10:58.343 "num_base_bdevs_operational": 4, 00:10:58.343 "base_bdevs_list": [ 00:10:58.343 { 00:10:58.343 "name": "BaseBdev1", 00:10:58.343 "uuid": "7ab6b733-f88e-46be-8e16-dd2b29ef792e", 00:10:58.343 "is_configured": true, 00:10:58.343 "data_offset": 0, 00:10:58.343 "data_size": 65536 00:10:58.343 }, 00:10:58.343 { 00:10:58.343 "name": "BaseBdev2", 00:10:58.343 "uuid": "477255e4-b576-40c5-babc-50158994be6c", 00:10:58.343 "is_configured": true, 00:10:58.343 "data_offset": 0, 00:10:58.343 "data_size": 65536 00:10:58.343 }, 00:10:58.343 { 00:10:58.343 "name": "BaseBdev3", 
00:10:58.343 "uuid": "fb52a6d3-aeed-4d62-925e-befeb953518a", 00:10:58.343 "is_configured": true, 00:10:58.343 "data_offset": 0, 00:10:58.343 "data_size": 65536 00:10:58.344 }, 00:10:58.344 { 00:10:58.344 "name": "BaseBdev4", 00:10:58.344 "uuid": "fdd0d81d-fffe-400a-b4d3-73ed7888a8a1", 00:10:58.344 "is_configured": true, 00:10:58.344 "data_offset": 0, 00:10:58.344 "data_size": 65536 00:10:58.344 } 00:10:58.344 ] 00:10:58.344 }' 00:10:58.344 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.344 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.913 [2024-10-13 02:25:17.368719] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.913 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.913 
02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.913 "name": "Existed_Raid", 00:10:58.913 "aliases": [ 00:10:58.913 "b8530b5b-6762-4c4f-95c7-6bd40c1480b5" 00:10:58.913 ], 00:10:58.913 "product_name": "Raid Volume", 00:10:58.913 "block_size": 512, 00:10:58.913 "num_blocks": 262144, 00:10:58.913 "uuid": "b8530b5b-6762-4c4f-95c7-6bd40c1480b5", 00:10:58.913 "assigned_rate_limits": { 00:10:58.913 "rw_ios_per_sec": 0, 00:10:58.913 "rw_mbytes_per_sec": 0, 00:10:58.913 "r_mbytes_per_sec": 0, 00:10:58.913 "w_mbytes_per_sec": 0 00:10:58.913 }, 00:10:58.913 "claimed": false, 00:10:58.913 "zoned": false, 00:10:58.913 "supported_io_types": { 00:10:58.913 "read": true, 00:10:58.913 "write": true, 00:10:58.913 "unmap": true, 00:10:58.913 "flush": true, 00:10:58.913 "reset": true, 00:10:58.913 "nvme_admin": false, 00:10:58.913 "nvme_io": false, 00:10:58.913 "nvme_io_md": false, 00:10:58.913 "write_zeroes": true, 00:10:58.913 "zcopy": false, 00:10:58.913 "get_zone_info": false, 00:10:58.913 "zone_management": false, 00:10:58.913 "zone_append": false, 00:10:58.913 "compare": false, 00:10:58.913 "compare_and_write": false, 00:10:58.913 "abort": false, 00:10:58.913 "seek_hole": false, 00:10:58.913 "seek_data": false, 00:10:58.913 "copy": false, 00:10:58.913 "nvme_iov_md": false 00:10:58.913 }, 00:10:58.913 "memory_domains": [ 00:10:58.913 { 00:10:58.913 "dma_device_id": "system", 00:10:58.913 "dma_device_type": 1 00:10:58.913 }, 00:10:58.913 { 00:10:58.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.913 "dma_device_type": 2 00:10:58.913 }, 00:10:58.913 { 00:10:58.913 "dma_device_id": "system", 00:10:58.913 "dma_device_type": 1 00:10:58.913 }, 00:10:58.913 { 00:10:58.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.913 "dma_device_type": 2 00:10:58.913 }, 00:10:58.913 { 00:10:58.913 "dma_device_id": "system", 00:10:58.913 "dma_device_type": 1 00:10:58.913 }, 00:10:58.913 { 00:10:58.913 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:58.913 "dma_device_type": 2 00:10:58.913 }, 00:10:58.913 { 00:10:58.913 "dma_device_id": "system", 00:10:58.913 "dma_device_type": 1 00:10:58.913 }, 00:10:58.913 { 00:10:58.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.913 "dma_device_type": 2 00:10:58.913 } 00:10:58.913 ], 00:10:58.913 "driver_specific": { 00:10:58.913 "raid": { 00:10:58.913 "uuid": "b8530b5b-6762-4c4f-95c7-6bd40c1480b5", 00:10:58.913 "strip_size_kb": 64, 00:10:58.913 "state": "online", 00:10:58.913 "raid_level": "concat", 00:10:58.913 "superblock": false, 00:10:58.913 "num_base_bdevs": 4, 00:10:58.913 "num_base_bdevs_discovered": 4, 00:10:58.913 "num_base_bdevs_operational": 4, 00:10:58.913 "base_bdevs_list": [ 00:10:58.913 { 00:10:58.913 "name": "BaseBdev1", 00:10:58.913 "uuid": "7ab6b733-f88e-46be-8e16-dd2b29ef792e", 00:10:58.913 "is_configured": true, 00:10:58.913 "data_offset": 0, 00:10:58.913 "data_size": 65536 00:10:58.913 }, 00:10:58.913 { 00:10:58.913 "name": "BaseBdev2", 00:10:58.913 "uuid": "477255e4-b576-40c5-babc-50158994be6c", 00:10:58.913 "is_configured": true, 00:10:58.914 "data_offset": 0, 00:10:58.914 "data_size": 65536 00:10:58.914 }, 00:10:58.914 { 00:10:58.914 "name": "BaseBdev3", 00:10:58.914 "uuid": "fb52a6d3-aeed-4d62-925e-befeb953518a", 00:10:58.914 "is_configured": true, 00:10:58.914 "data_offset": 0, 00:10:58.914 "data_size": 65536 00:10:58.914 }, 00:10:58.914 { 00:10:58.914 "name": "BaseBdev4", 00:10:58.914 "uuid": "fdd0d81d-fffe-400a-b4d3-73ed7888a8a1", 00:10:58.914 "is_configured": true, 00:10:58.914 "data_offset": 0, 00:10:58.914 "data_size": 65536 00:10:58.914 } 00:10:58.914 ] 00:10:58.914 } 00:10:58.914 } 00:10:58.914 }' 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:58.914 BaseBdev2 
00:10:58.914 BaseBdev3 00:10:58.914 BaseBdev4' 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.914 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.173 02:25:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.173 02:25:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.173 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.173 [2024-10-13 02:25:17.691896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.173 [2024-10-13 02:25:17.691938] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.174 [2024-10-13 02:25:17.692008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.174 "name": "Existed_Raid", 00:10:59.174 "uuid": "b8530b5b-6762-4c4f-95c7-6bd40c1480b5", 00:10:59.174 "strip_size_kb": 64, 00:10:59.174 "state": "offline", 00:10:59.174 "raid_level": "concat", 00:10:59.174 "superblock": false, 00:10:59.174 "num_base_bdevs": 4, 00:10:59.174 "num_base_bdevs_discovered": 3, 00:10:59.174 "num_base_bdevs_operational": 3, 00:10:59.174 "base_bdevs_list": [ 00:10:59.174 { 00:10:59.174 "name": null, 00:10:59.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.174 "is_configured": false, 00:10:59.174 "data_offset": 0, 00:10:59.174 "data_size": 65536 00:10:59.174 }, 00:10:59.174 { 00:10:59.174 "name": "BaseBdev2", 00:10:59.174 "uuid": "477255e4-b576-40c5-babc-50158994be6c", 00:10:59.174 "is_configured": 
true, 00:10:59.174 "data_offset": 0, 00:10:59.174 "data_size": 65536 00:10:59.174 }, 00:10:59.174 { 00:10:59.174 "name": "BaseBdev3", 00:10:59.174 "uuid": "fb52a6d3-aeed-4d62-925e-befeb953518a", 00:10:59.174 "is_configured": true, 00:10:59.174 "data_offset": 0, 00:10:59.174 "data_size": 65536 00:10:59.174 }, 00:10:59.174 { 00:10:59.174 "name": "BaseBdev4", 00:10:59.174 "uuid": "fdd0d81d-fffe-400a-b4d3-73ed7888a8a1", 00:10:59.174 "is_configured": true, 00:10:59.174 "data_offset": 0, 00:10:59.174 "data_size": 65536 00:10:59.174 } 00:10:59.174 ] 00:10:59.174 }' 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.174 02:25:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 [2024-10-13 02:25:18.191145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 [2024-10-13 02:25:18.262541] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.744 02:25:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 [2024-10-13 02:25:18.334320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:59.744 [2024-10-13 02:25:18.334407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:59.744 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.745 BaseBdev2 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.745 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.005 [ 00:11:00.005 { 00:11:00.005 "name": "BaseBdev2", 00:11:00.005 "aliases": [ 00:11:00.005 "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26" 00:11:00.005 ], 00:11:00.005 "product_name": "Malloc disk", 00:11:00.005 "block_size": 512, 00:11:00.005 "num_blocks": 65536, 00:11:00.005 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:00.005 "assigned_rate_limits": { 00:11:00.005 "rw_ios_per_sec": 0, 00:11:00.005 "rw_mbytes_per_sec": 0, 00:11:00.005 "r_mbytes_per_sec": 0, 00:11:00.005 "w_mbytes_per_sec": 0 00:11:00.005 }, 00:11:00.005 "claimed": false, 00:11:00.005 "zoned": false, 00:11:00.005 "supported_io_types": { 00:11:00.005 "read": true, 00:11:00.005 "write": true, 00:11:00.005 "unmap": true, 00:11:00.005 "flush": true, 00:11:00.005 "reset": true, 00:11:00.005 "nvme_admin": false, 00:11:00.005 "nvme_io": false, 00:11:00.005 "nvme_io_md": false, 00:11:00.005 "write_zeroes": true, 00:11:00.005 "zcopy": true, 00:11:00.005 "get_zone_info": false, 00:11:00.005 "zone_management": false, 00:11:00.005 "zone_append": false, 00:11:00.005 "compare": false, 00:11:00.005 "compare_and_write": false, 00:11:00.005 "abort": true, 00:11:00.005 "seek_hole": false, 00:11:00.005 
"seek_data": false, 00:11:00.005 "copy": true, 00:11:00.005 "nvme_iov_md": false 00:11:00.005 }, 00:11:00.005 "memory_domains": [ 00:11:00.005 { 00:11:00.005 "dma_device_id": "system", 00:11:00.005 "dma_device_type": 1 00:11:00.005 }, 00:11:00.005 { 00:11:00.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.005 "dma_device_type": 2 00:11:00.005 } 00:11:00.005 ], 00:11:00.005 "driver_specific": {} 00:11:00.005 } 00:11:00.005 ] 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.005 BaseBdev3 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.005 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.005 [ 00:11:00.005 { 00:11:00.005 "name": "BaseBdev3", 00:11:00.005 "aliases": [ 00:11:00.005 "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25" 00:11:00.005 ], 00:11:00.005 "product_name": "Malloc disk", 00:11:00.005 "block_size": 512, 00:11:00.005 "num_blocks": 65536, 00:11:00.005 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:00.005 "assigned_rate_limits": { 00:11:00.005 "rw_ios_per_sec": 0, 00:11:00.006 "rw_mbytes_per_sec": 0, 00:11:00.006 "r_mbytes_per_sec": 0, 00:11:00.006 "w_mbytes_per_sec": 0 00:11:00.006 }, 00:11:00.006 "claimed": false, 00:11:00.006 "zoned": false, 00:11:00.006 "supported_io_types": { 00:11:00.006 "read": true, 00:11:00.006 "write": true, 00:11:00.006 "unmap": true, 00:11:00.006 "flush": true, 00:11:00.006 "reset": true, 00:11:00.006 "nvme_admin": false, 00:11:00.006 "nvme_io": false, 00:11:00.006 "nvme_io_md": false, 00:11:00.006 "write_zeroes": true, 00:11:00.006 "zcopy": true, 00:11:00.006 "get_zone_info": false, 00:11:00.006 "zone_management": false, 00:11:00.006 "zone_append": false, 00:11:00.006 "compare": false, 00:11:00.006 "compare_and_write": false, 00:11:00.006 "abort": true, 00:11:00.006 "seek_hole": false, 00:11:00.006 "seek_data": false, 
00:11:00.006 "copy": true, 00:11:00.006 "nvme_iov_md": false 00:11:00.006 }, 00:11:00.006 "memory_domains": [ 00:11:00.006 { 00:11:00.006 "dma_device_id": "system", 00:11:00.006 "dma_device_type": 1 00:11:00.006 }, 00:11:00.006 { 00:11:00.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.006 "dma_device_type": 2 00:11:00.006 } 00:11:00.006 ], 00:11:00.006 "driver_specific": {} 00:11:00.006 } 00:11:00.006 ] 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.006 BaseBdev4 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:00.006 
02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.006 [ 00:11:00.006 { 00:11:00.006 "name": "BaseBdev4", 00:11:00.006 "aliases": [ 00:11:00.006 "9ea0d937-905a-41b3-b8f2-9b7150443098" 00:11:00.006 ], 00:11:00.006 "product_name": "Malloc disk", 00:11:00.006 "block_size": 512, 00:11:00.006 "num_blocks": 65536, 00:11:00.006 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:00.006 "assigned_rate_limits": { 00:11:00.006 "rw_ios_per_sec": 0, 00:11:00.006 "rw_mbytes_per_sec": 0, 00:11:00.006 "r_mbytes_per_sec": 0, 00:11:00.006 "w_mbytes_per_sec": 0 00:11:00.006 }, 00:11:00.006 "claimed": false, 00:11:00.006 "zoned": false, 00:11:00.006 "supported_io_types": { 00:11:00.006 "read": true, 00:11:00.006 "write": true, 00:11:00.006 "unmap": true, 00:11:00.006 "flush": true, 00:11:00.006 "reset": true, 00:11:00.006 "nvme_admin": false, 00:11:00.006 "nvme_io": false, 00:11:00.006 "nvme_io_md": false, 00:11:00.006 "write_zeroes": true, 00:11:00.006 "zcopy": true, 00:11:00.006 "get_zone_info": false, 00:11:00.006 "zone_management": false, 00:11:00.006 "zone_append": false, 00:11:00.006 "compare": false, 00:11:00.006 "compare_and_write": false, 00:11:00.006 "abort": true, 00:11:00.006 "seek_hole": false, 00:11:00.006 "seek_data": false, 00:11:00.006 
"copy": true, 00:11:00.006 "nvme_iov_md": false 00:11:00.006 }, 00:11:00.006 "memory_domains": [ 00:11:00.006 { 00:11:00.006 "dma_device_id": "system", 00:11:00.006 "dma_device_type": 1 00:11:00.006 }, 00:11:00.006 { 00:11:00.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.006 "dma_device_type": 2 00:11:00.006 } 00:11:00.006 ], 00:11:00.006 "driver_specific": {} 00:11:00.006 } 00:11:00.006 ] 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.006 [2024-10-13 02:25:18.565775] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.006 [2024-10-13 02:25:18.565959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.006 [2024-10-13 02:25:18.566010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.006 [2024-10-13 02:25:18.567936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.006 [2024-10-13 02:25:18.568046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.006 02:25:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.006 "name": "Existed_Raid", 00:11:00.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.006 "strip_size_kb": 64, 00:11:00.006 "state": "configuring", 00:11:00.006 
"raid_level": "concat", 00:11:00.006 "superblock": false, 00:11:00.006 "num_base_bdevs": 4, 00:11:00.006 "num_base_bdevs_discovered": 3, 00:11:00.006 "num_base_bdevs_operational": 4, 00:11:00.006 "base_bdevs_list": [ 00:11:00.006 { 00:11:00.006 "name": "BaseBdev1", 00:11:00.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.006 "is_configured": false, 00:11:00.006 "data_offset": 0, 00:11:00.006 "data_size": 0 00:11:00.006 }, 00:11:00.006 { 00:11:00.006 "name": "BaseBdev2", 00:11:00.006 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:00.006 "is_configured": true, 00:11:00.006 "data_offset": 0, 00:11:00.006 "data_size": 65536 00:11:00.006 }, 00:11:00.006 { 00:11:00.006 "name": "BaseBdev3", 00:11:00.006 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:00.006 "is_configured": true, 00:11:00.006 "data_offset": 0, 00:11:00.006 "data_size": 65536 00:11:00.006 }, 00:11:00.006 { 00:11:00.006 "name": "BaseBdev4", 00:11:00.006 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:00.006 "is_configured": true, 00:11:00.006 "data_offset": 0, 00:11:00.006 "data_size": 65536 00:11:00.006 } 00:11:00.006 ] 00:11:00.006 }' 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.006 02:25:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.575 [2024-10-13 02:25:19.033082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.575 "name": "Existed_Raid", 00:11:00.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.575 "strip_size_kb": 64, 00:11:00.575 "state": "configuring", 00:11:00.575 "raid_level": "concat", 00:11:00.575 "superblock": false, 
00:11:00.575 "num_base_bdevs": 4, 00:11:00.575 "num_base_bdevs_discovered": 2, 00:11:00.575 "num_base_bdevs_operational": 4, 00:11:00.575 "base_bdevs_list": [ 00:11:00.575 { 00:11:00.575 "name": "BaseBdev1", 00:11:00.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.575 "is_configured": false, 00:11:00.575 "data_offset": 0, 00:11:00.575 "data_size": 0 00:11:00.575 }, 00:11:00.575 { 00:11:00.575 "name": null, 00:11:00.575 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:00.575 "is_configured": false, 00:11:00.575 "data_offset": 0, 00:11:00.575 "data_size": 65536 00:11:00.575 }, 00:11:00.575 { 00:11:00.575 "name": "BaseBdev3", 00:11:00.575 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:00.575 "is_configured": true, 00:11:00.575 "data_offset": 0, 00:11:00.575 "data_size": 65536 00:11:00.575 }, 00:11:00.575 { 00:11:00.575 "name": "BaseBdev4", 00:11:00.575 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:00.575 "is_configured": true, 00:11:00.575 "data_offset": 0, 00:11:00.575 "data_size": 65536 00:11:00.575 } 00:11:00.575 ] 00:11:00.575 }' 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.575 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.145 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.145 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.145 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:01.145 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.145 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.145 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:01.145 02:25:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:01.145 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.145 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 [2024-10-13 02:25:19.579416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.146 BaseBdev1 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.146 [ 00:11:01.146 { 00:11:01.146 "name": "BaseBdev1", 00:11:01.146 "aliases": [ 00:11:01.146 "640e80db-4e64-478c-9391-f19c06a7aa55" 00:11:01.146 ], 00:11:01.146 "product_name": "Malloc disk", 00:11:01.146 "block_size": 512, 00:11:01.146 "num_blocks": 65536, 00:11:01.146 "uuid": "640e80db-4e64-478c-9391-f19c06a7aa55", 00:11:01.146 "assigned_rate_limits": { 00:11:01.146 "rw_ios_per_sec": 0, 00:11:01.146 "rw_mbytes_per_sec": 0, 00:11:01.146 "r_mbytes_per_sec": 0, 00:11:01.146 "w_mbytes_per_sec": 0 00:11:01.146 }, 00:11:01.146 "claimed": true, 00:11:01.146 "claim_type": "exclusive_write", 00:11:01.146 "zoned": false, 00:11:01.146 "supported_io_types": { 00:11:01.146 "read": true, 00:11:01.146 "write": true, 00:11:01.146 "unmap": true, 00:11:01.146 "flush": true, 00:11:01.146 "reset": true, 00:11:01.146 "nvme_admin": false, 00:11:01.146 "nvme_io": false, 00:11:01.146 "nvme_io_md": false, 00:11:01.146 "write_zeroes": true, 00:11:01.146 "zcopy": true, 00:11:01.146 "get_zone_info": false, 00:11:01.146 "zone_management": false, 00:11:01.146 "zone_append": false, 00:11:01.146 "compare": false, 00:11:01.146 "compare_and_write": false, 00:11:01.146 "abort": true, 00:11:01.146 "seek_hole": false, 00:11:01.146 "seek_data": false, 00:11:01.146 "copy": true, 00:11:01.146 "nvme_iov_md": false 00:11:01.146 }, 00:11:01.146 "memory_domains": [ 00:11:01.146 { 00:11:01.146 "dma_device_id": "system", 00:11:01.146 "dma_device_type": 1 00:11:01.146 }, 00:11:01.146 { 00:11:01.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.146 "dma_device_type": 2 00:11:01.146 } 00:11:01.146 ], 00:11:01.146 "driver_specific": {} 00:11:01.146 } 00:11:01.146 ] 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.146 "name": "Existed_Raid", 00:11:01.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.146 "strip_size_kb": 64, 00:11:01.146 "state": "configuring", 00:11:01.146 "raid_level": "concat", 00:11:01.146 "superblock": false, 
00:11:01.146 "num_base_bdevs": 4, 00:11:01.146 "num_base_bdevs_discovered": 3, 00:11:01.146 "num_base_bdevs_operational": 4, 00:11:01.146 "base_bdevs_list": [ 00:11:01.146 { 00:11:01.146 "name": "BaseBdev1", 00:11:01.146 "uuid": "640e80db-4e64-478c-9391-f19c06a7aa55", 00:11:01.146 "is_configured": true, 00:11:01.146 "data_offset": 0, 00:11:01.146 "data_size": 65536 00:11:01.146 }, 00:11:01.146 { 00:11:01.146 "name": null, 00:11:01.146 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:01.146 "is_configured": false, 00:11:01.146 "data_offset": 0, 00:11:01.146 "data_size": 65536 00:11:01.146 }, 00:11:01.146 { 00:11:01.146 "name": "BaseBdev3", 00:11:01.146 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:01.146 "is_configured": true, 00:11:01.146 "data_offset": 0, 00:11:01.146 "data_size": 65536 00:11:01.146 }, 00:11:01.146 { 00:11:01.146 "name": "BaseBdev4", 00:11:01.146 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:01.146 "is_configured": true, 00:11:01.146 "data_offset": 0, 00:11:01.146 "data_size": 65536 00:11:01.146 } 00:11:01.146 ] 00:11:01.146 }' 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.146 02:25:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:01.406 02:25:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.406 [2024-10-13 02:25:20.054815] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.406 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.407 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.407 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.407 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.407 02:25:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.407 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.407 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.666 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.666 "name": "Existed_Raid", 00:11:01.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.666 "strip_size_kb": 64, 00:11:01.666 "state": "configuring", 00:11:01.666 "raid_level": "concat", 00:11:01.666 "superblock": false, 00:11:01.666 "num_base_bdevs": 4, 00:11:01.666 "num_base_bdevs_discovered": 2, 00:11:01.666 "num_base_bdevs_operational": 4, 00:11:01.666 "base_bdevs_list": [ 00:11:01.666 { 00:11:01.666 "name": "BaseBdev1", 00:11:01.666 "uuid": "640e80db-4e64-478c-9391-f19c06a7aa55", 00:11:01.666 "is_configured": true, 00:11:01.666 "data_offset": 0, 00:11:01.666 "data_size": 65536 00:11:01.666 }, 00:11:01.666 { 00:11:01.666 "name": null, 00:11:01.666 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:01.666 "is_configured": false, 00:11:01.666 "data_offset": 0, 00:11:01.666 "data_size": 65536 00:11:01.666 }, 00:11:01.666 { 00:11:01.666 "name": null, 00:11:01.666 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:01.666 "is_configured": false, 00:11:01.666 "data_offset": 0, 00:11:01.666 "data_size": 65536 00:11:01.666 }, 00:11:01.666 { 00:11:01.666 "name": "BaseBdev4", 00:11:01.666 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:01.666 "is_configured": true, 00:11:01.666 "data_offset": 0, 00:11:01.666 "data_size": 65536 00:11:01.666 } 00:11:01.666 ] 00:11:01.666 }' 00:11:01.666 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.666 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.926 [2024-10-13 02:25:20.546092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.926 "name": "Existed_Raid", 00:11:01.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.926 "strip_size_kb": 64, 00:11:01.926 "state": "configuring", 00:11:01.926 "raid_level": "concat", 00:11:01.926 "superblock": false, 00:11:01.926 "num_base_bdevs": 4, 00:11:01.926 "num_base_bdevs_discovered": 3, 00:11:01.926 "num_base_bdevs_operational": 4, 00:11:01.926 "base_bdevs_list": [ 00:11:01.926 { 00:11:01.926 "name": "BaseBdev1", 00:11:01.926 "uuid": "640e80db-4e64-478c-9391-f19c06a7aa55", 00:11:01.926 "is_configured": true, 00:11:01.926 "data_offset": 0, 00:11:01.926 "data_size": 65536 00:11:01.926 }, 00:11:01.926 { 00:11:01.926 "name": null, 00:11:01.926 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:01.926 "is_configured": false, 00:11:01.926 "data_offset": 0, 00:11:01.926 "data_size": 65536 00:11:01.926 }, 00:11:01.926 { 00:11:01.926 "name": "BaseBdev3", 00:11:01.926 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:01.926 
"is_configured": true, 00:11:01.926 "data_offset": 0, 00:11:01.926 "data_size": 65536 00:11:01.926 }, 00:11:01.926 { 00:11:01.926 "name": "BaseBdev4", 00:11:01.926 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:01.926 "is_configured": true, 00:11:01.926 "data_offset": 0, 00:11:01.926 "data_size": 65536 00:11:01.926 } 00:11:01.926 ] 00:11:01.926 }' 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.926 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.495 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.495 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.495 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.495 02:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:02.495 02:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.495 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:02.495 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.496 [2024-10-13 02:25:21.033268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.496 "name": "Existed_Raid", 00:11:02.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.496 "strip_size_kb": 64, 00:11:02.496 "state": "configuring", 00:11:02.496 "raid_level": "concat", 00:11:02.496 "superblock": false, 00:11:02.496 "num_base_bdevs": 4, 00:11:02.496 "num_base_bdevs_discovered": 2, 00:11:02.496 "num_base_bdevs_operational": 4, 
00:11:02.496 "base_bdevs_list": [ 00:11:02.496 { 00:11:02.496 "name": null, 00:11:02.496 "uuid": "640e80db-4e64-478c-9391-f19c06a7aa55", 00:11:02.496 "is_configured": false, 00:11:02.496 "data_offset": 0, 00:11:02.496 "data_size": 65536 00:11:02.496 }, 00:11:02.496 { 00:11:02.496 "name": null, 00:11:02.496 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:02.496 "is_configured": false, 00:11:02.496 "data_offset": 0, 00:11:02.496 "data_size": 65536 00:11:02.496 }, 00:11:02.496 { 00:11:02.496 "name": "BaseBdev3", 00:11:02.496 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:02.496 "is_configured": true, 00:11:02.496 "data_offset": 0, 00:11:02.496 "data_size": 65536 00:11:02.496 }, 00:11:02.496 { 00:11:02.496 "name": "BaseBdev4", 00:11:02.496 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:02.496 "is_configured": true, 00:11:02.496 "data_offset": 0, 00:11:02.496 "data_size": 65536 00:11:02.496 } 00:11:02.496 ] 00:11:02.496 }' 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.496 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.064 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.064 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.064 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.064 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:03.064 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.064 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:03.064 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:03.064 02:25:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.065 [2024-10-13 02:25:21.537197] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.065 "name": "Existed_Raid", 00:11:03.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.065 "strip_size_kb": 64, 00:11:03.065 "state": "configuring", 00:11:03.065 "raid_level": "concat", 00:11:03.065 "superblock": false, 00:11:03.065 "num_base_bdevs": 4, 00:11:03.065 "num_base_bdevs_discovered": 3, 00:11:03.065 "num_base_bdevs_operational": 4, 00:11:03.065 "base_bdevs_list": [ 00:11:03.065 { 00:11:03.065 "name": null, 00:11:03.065 "uuid": "640e80db-4e64-478c-9391-f19c06a7aa55", 00:11:03.065 "is_configured": false, 00:11:03.065 "data_offset": 0, 00:11:03.065 "data_size": 65536 00:11:03.065 }, 00:11:03.065 { 00:11:03.065 "name": "BaseBdev2", 00:11:03.065 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:03.065 "is_configured": true, 00:11:03.065 "data_offset": 0, 00:11:03.065 "data_size": 65536 00:11:03.065 }, 00:11:03.065 { 00:11:03.065 "name": "BaseBdev3", 00:11:03.065 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:03.065 "is_configured": true, 00:11:03.065 "data_offset": 0, 00:11:03.065 "data_size": 65536 00:11:03.065 }, 00:11:03.065 { 00:11:03.065 "name": "BaseBdev4", 00:11:03.065 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:03.065 "is_configured": true, 00:11:03.065 "data_offset": 0, 00:11:03.065 "data_size": 65536 00:11:03.065 } 00:11:03.065 ] 00:11:03.065 }' 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.065 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.323 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.323 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:03.323 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.323 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.323 02:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.324 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:03.324 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:03.324 02:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.324 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.324 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 640e80db-4e64-478c-9391-f19c06a7aa55 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.583 [2024-10-13 02:25:22.047342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:03.583 [2024-10-13 02:25:22.047468] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:03.583 [2024-10-13 02:25:22.047481] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:03.583 [2024-10-13 02:25:22.047759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:11:03.583 [2024-10-13 02:25:22.047895] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:03.583 [2024-10-13 02:25:22.047909] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:03.583 [2024-10-13 02:25:22.048090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.583 NewBaseBdev 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.583 [ 00:11:03.583 { 
00:11:03.583 "name": "NewBaseBdev", 00:11:03.583 "aliases": [ 00:11:03.583 "640e80db-4e64-478c-9391-f19c06a7aa55" 00:11:03.583 ], 00:11:03.583 "product_name": "Malloc disk", 00:11:03.583 "block_size": 512, 00:11:03.583 "num_blocks": 65536, 00:11:03.583 "uuid": "640e80db-4e64-478c-9391-f19c06a7aa55", 00:11:03.583 "assigned_rate_limits": { 00:11:03.583 "rw_ios_per_sec": 0, 00:11:03.583 "rw_mbytes_per_sec": 0, 00:11:03.583 "r_mbytes_per_sec": 0, 00:11:03.583 "w_mbytes_per_sec": 0 00:11:03.583 }, 00:11:03.583 "claimed": true, 00:11:03.583 "claim_type": "exclusive_write", 00:11:03.583 "zoned": false, 00:11:03.583 "supported_io_types": { 00:11:03.583 "read": true, 00:11:03.583 "write": true, 00:11:03.583 "unmap": true, 00:11:03.583 "flush": true, 00:11:03.583 "reset": true, 00:11:03.583 "nvme_admin": false, 00:11:03.583 "nvme_io": false, 00:11:03.583 "nvme_io_md": false, 00:11:03.583 "write_zeroes": true, 00:11:03.583 "zcopy": true, 00:11:03.583 "get_zone_info": false, 00:11:03.583 "zone_management": false, 00:11:03.583 "zone_append": false, 00:11:03.583 "compare": false, 00:11:03.583 "compare_and_write": false, 00:11:03.583 "abort": true, 00:11:03.583 "seek_hole": false, 00:11:03.583 "seek_data": false, 00:11:03.583 "copy": true, 00:11:03.583 "nvme_iov_md": false 00:11:03.583 }, 00:11:03.583 "memory_domains": [ 00:11:03.583 { 00:11:03.583 "dma_device_id": "system", 00:11:03.583 "dma_device_type": 1 00:11:03.583 }, 00:11:03.583 { 00:11:03.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.583 "dma_device_type": 2 00:11:03.583 } 00:11:03.583 ], 00:11:03.583 "driver_specific": {} 00:11:03.583 } 00:11:03.583 ] 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:03.583 
02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.583 "name": "Existed_Raid", 00:11:03.583 "uuid": "f5afbc33-ffbb-4f1f-854d-49d6e1a8485d", 00:11:03.583 "strip_size_kb": 64, 00:11:03.583 "state": "online", 00:11:03.583 "raid_level": "concat", 00:11:03.583 "superblock": false, 00:11:03.583 "num_base_bdevs": 4, 00:11:03.583 "num_base_bdevs_discovered": 4, 00:11:03.583 
"num_base_bdevs_operational": 4, 00:11:03.583 "base_bdevs_list": [ 00:11:03.583 { 00:11:03.583 "name": "NewBaseBdev", 00:11:03.583 "uuid": "640e80db-4e64-478c-9391-f19c06a7aa55", 00:11:03.583 "is_configured": true, 00:11:03.583 "data_offset": 0, 00:11:03.583 "data_size": 65536 00:11:03.583 }, 00:11:03.583 { 00:11:03.583 "name": "BaseBdev2", 00:11:03.583 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:03.583 "is_configured": true, 00:11:03.583 "data_offset": 0, 00:11:03.583 "data_size": 65536 00:11:03.583 }, 00:11:03.583 { 00:11:03.583 "name": "BaseBdev3", 00:11:03.583 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:03.583 "is_configured": true, 00:11:03.583 "data_offset": 0, 00:11:03.583 "data_size": 65536 00:11:03.583 }, 00:11:03.583 { 00:11:03.583 "name": "BaseBdev4", 00:11:03.583 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:03.583 "is_configured": true, 00:11:03.583 "data_offset": 0, 00:11:03.583 "data_size": 65536 00:11:03.583 } 00:11:03.583 ] 00:11:03.583 }' 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.583 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.153 
02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 [2024-10-13 02:25:22.575301] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.153 "name": "Existed_Raid", 00:11:04.153 "aliases": [ 00:11:04.153 "f5afbc33-ffbb-4f1f-854d-49d6e1a8485d" 00:11:04.153 ], 00:11:04.153 "product_name": "Raid Volume", 00:11:04.153 "block_size": 512, 00:11:04.153 "num_blocks": 262144, 00:11:04.153 "uuid": "f5afbc33-ffbb-4f1f-854d-49d6e1a8485d", 00:11:04.153 "assigned_rate_limits": { 00:11:04.153 "rw_ios_per_sec": 0, 00:11:04.153 "rw_mbytes_per_sec": 0, 00:11:04.153 "r_mbytes_per_sec": 0, 00:11:04.153 "w_mbytes_per_sec": 0 00:11:04.153 }, 00:11:04.153 "claimed": false, 00:11:04.153 "zoned": false, 00:11:04.153 "supported_io_types": { 00:11:04.153 "read": true, 00:11:04.153 "write": true, 00:11:04.153 "unmap": true, 00:11:04.153 "flush": true, 00:11:04.153 "reset": true, 00:11:04.153 "nvme_admin": false, 00:11:04.153 "nvme_io": false, 00:11:04.153 "nvme_io_md": false, 00:11:04.153 "write_zeroes": true, 00:11:04.153 "zcopy": false, 00:11:04.153 "get_zone_info": false, 00:11:04.153 "zone_management": false, 00:11:04.153 "zone_append": false, 00:11:04.153 "compare": false, 00:11:04.153 "compare_and_write": false, 00:11:04.153 "abort": false, 00:11:04.153 "seek_hole": false, 00:11:04.153 "seek_data": false, 00:11:04.153 "copy": false, 00:11:04.153 "nvme_iov_md": false 00:11:04.153 }, 00:11:04.153 "memory_domains": [ 00:11:04.153 { 00:11:04.153 "dma_device_id": 
"system", 00:11:04.153 "dma_device_type": 1 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.153 "dma_device_type": 2 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "dma_device_id": "system", 00:11:04.153 "dma_device_type": 1 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.153 "dma_device_type": 2 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "dma_device_id": "system", 00:11:04.153 "dma_device_type": 1 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.153 "dma_device_type": 2 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "dma_device_id": "system", 00:11:04.153 "dma_device_type": 1 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.153 "dma_device_type": 2 00:11:04.153 } 00:11:04.153 ], 00:11:04.153 "driver_specific": { 00:11:04.153 "raid": { 00:11:04.153 "uuid": "f5afbc33-ffbb-4f1f-854d-49d6e1a8485d", 00:11:04.153 "strip_size_kb": 64, 00:11:04.153 "state": "online", 00:11:04.153 "raid_level": "concat", 00:11:04.153 "superblock": false, 00:11:04.153 "num_base_bdevs": 4, 00:11:04.153 "num_base_bdevs_discovered": 4, 00:11:04.153 "num_base_bdevs_operational": 4, 00:11:04.153 "base_bdevs_list": [ 00:11:04.153 { 00:11:04.153 "name": "NewBaseBdev", 00:11:04.153 "uuid": "640e80db-4e64-478c-9391-f19c06a7aa55", 00:11:04.153 "is_configured": true, 00:11:04.153 "data_offset": 0, 00:11:04.153 "data_size": 65536 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "name": "BaseBdev2", 00:11:04.153 "uuid": "b2e2a2db-04b4-4bcc-8a25-4630e85b7c26", 00:11:04.153 "is_configured": true, 00:11:04.153 "data_offset": 0, 00:11:04.153 "data_size": 65536 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "name": "BaseBdev3", 00:11:04.153 "uuid": "2d3b8da6-e51d-4602-9f2f-c4f6a8730f25", 00:11:04.153 "is_configured": true, 00:11:04.153 "data_offset": 0, 00:11:04.153 "data_size": 65536 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "name": 
"BaseBdev4", 00:11:04.153 "uuid": "9ea0d937-905a-41b3-b8f2-9b7150443098", 00:11:04.153 "is_configured": true, 00:11:04.153 "data_offset": 0, 00:11:04.153 "data_size": 65536 00:11:04.153 } 00:11:04.153 ] 00:11:04.153 } 00:11:04.153 } 00:11:04.153 }' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:04.153 BaseBdev2 00:11:04.153 BaseBdev3 00:11:04.153 BaseBdev4' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.153 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.413 02:25:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.413 [2024-10-13 02:25:22.894404] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.413 [2024-10-13 02:25:22.894516] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.413 [2024-10-13 02:25:22.894615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.413 [2024-10-13 02:25:22.894711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.413 [2024-10-13 02:25:22.894754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:11:04.413 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82074 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 82074 ']' 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82074 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82074 00:11:04.414 killing process with pid 82074 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82074' 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82074 00:11:04.414 [2024-10-13 02:25:22.942655] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.414 02:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82074 00:11:04.414 [2024-10-13 02:25:22.983372] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:04.673 00:11:04.673 real 0m9.674s 00:11:04.673 user 0m16.350s 00:11:04.673 sys 0m2.110s 00:11:04.673 ************************************ 00:11:04.673 END TEST raid_state_function_test 00:11:04.673 ************************************ 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.673 02:25:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:04.673 02:25:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:04.673 02:25:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.673 02:25:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.673 ************************************ 00:11:04.673 START TEST raid_state_function_test_sb 00:11:04.673 ************************************ 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:04.673 02:25:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:04.673 Process raid pid: 82724 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82724 00:11:04.673 02:25:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82724' 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82724 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82724 ']' 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.673 02:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.933 [2024-10-13 02:25:23.402502] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:04.933 [2024-10-13 02:25:23.403344] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.933 [2024-10-13 02:25:23.551475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.933 [2024-10-13 02:25:23.600050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.192 [2024-10-13 02:25:23.642996] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.192 [2024-10-13 02:25:23.643121] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.761 [2024-10-13 02:25:24.276708] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.761 [2024-10-13 02:25:24.276772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.761 [2024-10-13 02:25:24.276785] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.761 [2024-10-13 02:25:24.276796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.761 [2024-10-13 02:25:24.276802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:05.761 [2024-10-13 02:25:24.276815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.761 [2024-10-13 02:25:24.276821] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.761 [2024-10-13 02:25:24.276830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.761 02:25:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.761 "name": "Existed_Raid", 00:11:05.761 "uuid": "e3221fc0-4fc5-4004-912f-ac0a3aa32ccf", 00:11:05.761 "strip_size_kb": 64, 00:11:05.761 "state": "configuring", 00:11:05.761 "raid_level": "concat", 00:11:05.761 "superblock": true, 00:11:05.761 "num_base_bdevs": 4, 00:11:05.761 "num_base_bdevs_discovered": 0, 00:11:05.761 "num_base_bdevs_operational": 4, 00:11:05.761 "base_bdevs_list": [ 00:11:05.761 { 00:11:05.761 "name": "BaseBdev1", 00:11:05.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.761 "is_configured": false, 00:11:05.761 "data_offset": 0, 00:11:05.761 "data_size": 0 00:11:05.761 }, 00:11:05.761 { 00:11:05.761 "name": "BaseBdev2", 00:11:05.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.761 "is_configured": false, 00:11:05.761 "data_offset": 0, 00:11:05.761 "data_size": 0 00:11:05.761 }, 00:11:05.761 { 00:11:05.761 "name": "BaseBdev3", 00:11:05.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.761 "is_configured": false, 00:11:05.761 "data_offset": 0, 00:11:05.761 "data_size": 0 00:11:05.761 }, 00:11:05.761 { 00:11:05.761 "name": "BaseBdev4", 00:11:05.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.761 "is_configured": false, 00:11:05.761 "data_offset": 0, 00:11:05.761 "data_size": 0 00:11:05.761 } 00:11:05.761 ] 00:11:05.761 }' 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.761 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.331 02:25:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.331 [2024-10-13 02:25:24.735664] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.331 [2024-10-13 02:25:24.735819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.331 [2024-10-13 02:25:24.747647] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.331 [2024-10-13 02:25:24.747752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.331 [2024-10-13 02:25:24.747779] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.331 [2024-10-13 02:25:24.747801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.331 [2024-10-13 02:25:24.747819] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:06.331 [2024-10-13 02:25:24.747839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.331 [2024-10-13 02:25:24.747857] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:06.331 [2024-10-13 02:25:24.747891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.331 [2024-10-13 02:25:24.768415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.331 BaseBdev1 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.331 [ 00:11:06.331 { 00:11:06.331 "name": "BaseBdev1", 00:11:06.331 "aliases": [ 00:11:06.331 "2a5748fe-044d-401e-8c3e-bbf392d60b58" 00:11:06.331 ], 00:11:06.331 "product_name": "Malloc disk", 00:11:06.331 "block_size": 512, 00:11:06.331 "num_blocks": 65536, 00:11:06.331 "uuid": "2a5748fe-044d-401e-8c3e-bbf392d60b58", 00:11:06.331 "assigned_rate_limits": { 00:11:06.331 "rw_ios_per_sec": 0, 00:11:06.331 "rw_mbytes_per_sec": 0, 00:11:06.331 "r_mbytes_per_sec": 0, 00:11:06.331 "w_mbytes_per_sec": 0 00:11:06.331 }, 00:11:06.331 "claimed": true, 00:11:06.331 "claim_type": "exclusive_write", 00:11:06.331 "zoned": false, 00:11:06.331 "supported_io_types": { 00:11:06.331 "read": true, 00:11:06.331 "write": true, 00:11:06.331 "unmap": true, 00:11:06.331 "flush": true, 00:11:06.331 "reset": true, 00:11:06.331 "nvme_admin": false, 00:11:06.331 "nvme_io": false, 00:11:06.331 "nvme_io_md": false, 00:11:06.331 "write_zeroes": true, 00:11:06.331 "zcopy": true, 00:11:06.331 "get_zone_info": false, 00:11:06.331 "zone_management": false, 00:11:06.331 "zone_append": false, 00:11:06.331 "compare": false, 00:11:06.331 "compare_and_write": false, 00:11:06.331 "abort": true, 00:11:06.331 "seek_hole": false, 00:11:06.331 "seek_data": false, 00:11:06.331 "copy": true, 00:11:06.331 "nvme_iov_md": false 00:11:06.331 }, 00:11:06.331 "memory_domains": [ 00:11:06.331 { 00:11:06.331 "dma_device_id": "system", 00:11:06.331 "dma_device_type": 1 00:11:06.331 }, 00:11:06.331 { 00:11:06.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.331 "dma_device_type": 2 00:11:06.331 } 
00:11:06.331 ], 00:11:06.331 "driver_specific": {} 00:11:06.331 } 00:11:06.331 ] 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.331 02:25:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.331 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.331 "name": "Existed_Raid", 00:11:06.331 "uuid": "f93fc491-6cb5-43ea-a9db-04f9f42d81fd", 00:11:06.331 "strip_size_kb": 64, 00:11:06.331 "state": "configuring", 00:11:06.331 "raid_level": "concat", 00:11:06.331 "superblock": true, 00:11:06.331 "num_base_bdevs": 4, 00:11:06.331 "num_base_bdevs_discovered": 1, 00:11:06.331 "num_base_bdevs_operational": 4, 00:11:06.331 "base_bdevs_list": [ 00:11:06.331 { 00:11:06.331 "name": "BaseBdev1", 00:11:06.331 "uuid": "2a5748fe-044d-401e-8c3e-bbf392d60b58", 00:11:06.331 "is_configured": true, 00:11:06.331 "data_offset": 2048, 00:11:06.331 "data_size": 63488 00:11:06.331 }, 00:11:06.331 { 00:11:06.331 "name": "BaseBdev2", 00:11:06.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.331 "is_configured": false, 00:11:06.331 "data_offset": 0, 00:11:06.331 "data_size": 0 00:11:06.331 }, 00:11:06.331 { 00:11:06.332 "name": "BaseBdev3", 00:11:06.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.332 "is_configured": false, 00:11:06.332 "data_offset": 0, 00:11:06.332 "data_size": 0 00:11:06.332 }, 00:11:06.332 { 00:11:06.332 "name": "BaseBdev4", 00:11:06.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.332 "is_configured": false, 00:11:06.332 "data_offset": 0, 00:11:06.332 "data_size": 0 00:11:06.332 } 00:11:06.332 ] 00:11:06.332 }' 00:11:06.332 02:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.332 02:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.591 02:25:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.591 [2024-10-13 02:25:25.247739] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.591 [2024-10-13 02:25:25.247883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.591 [2024-10-13 02:25:25.259754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.591 [2024-10-13 02:25:25.261634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.591 [2024-10-13 02:25:25.261715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.591 [2024-10-13 02:25:25.261743] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:06.591 [2024-10-13 02:25:25.261765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.591 [2024-10-13 02:25:25.261783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:06.591 [2024-10-13 02:25:25.261803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.591 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.592 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.851 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.851 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.851 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.851 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:06.851 "name": "Existed_Raid", 00:11:06.851 "uuid": "74cba447-180c-4233-89eb-208819b2a3f2", 00:11:06.851 "strip_size_kb": 64, 00:11:06.851 "state": "configuring", 00:11:06.851 "raid_level": "concat", 00:11:06.851 "superblock": true, 00:11:06.851 "num_base_bdevs": 4, 00:11:06.851 "num_base_bdevs_discovered": 1, 00:11:06.851 "num_base_bdevs_operational": 4, 00:11:06.851 "base_bdevs_list": [ 00:11:06.851 { 00:11:06.851 "name": "BaseBdev1", 00:11:06.851 "uuid": "2a5748fe-044d-401e-8c3e-bbf392d60b58", 00:11:06.851 "is_configured": true, 00:11:06.851 "data_offset": 2048, 00:11:06.851 "data_size": 63488 00:11:06.851 }, 00:11:06.851 { 00:11:06.851 "name": "BaseBdev2", 00:11:06.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.851 "is_configured": false, 00:11:06.851 "data_offset": 0, 00:11:06.851 "data_size": 0 00:11:06.851 }, 00:11:06.851 { 00:11:06.851 "name": "BaseBdev3", 00:11:06.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.851 "is_configured": false, 00:11:06.851 "data_offset": 0, 00:11:06.851 "data_size": 0 00:11:06.851 }, 00:11:06.851 { 00:11:06.851 "name": "BaseBdev4", 00:11:06.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.851 "is_configured": false, 00:11:06.851 "data_offset": 0, 00:11:06.851 "data_size": 0 00:11:06.851 } 00:11:06.851 ] 00:11:06.851 }' 00:11:06.851 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.851 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.110 [2024-10-13 02:25:25.706353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:07.110 BaseBdev2 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.110 [ 00:11:07.110 { 00:11:07.110 "name": "BaseBdev2", 00:11:07.110 "aliases": [ 00:11:07.110 "a78a7312-2fa7-4347-b991-1fc4efab4f92" 00:11:07.110 ], 00:11:07.110 "product_name": "Malloc disk", 00:11:07.110 "block_size": 512, 00:11:07.110 "num_blocks": 65536, 00:11:07.110 "uuid": "a78a7312-2fa7-4347-b991-1fc4efab4f92", 
00:11:07.110 "assigned_rate_limits": { 00:11:07.110 "rw_ios_per_sec": 0, 00:11:07.110 "rw_mbytes_per_sec": 0, 00:11:07.110 "r_mbytes_per_sec": 0, 00:11:07.110 "w_mbytes_per_sec": 0 00:11:07.110 }, 00:11:07.110 "claimed": true, 00:11:07.110 "claim_type": "exclusive_write", 00:11:07.110 "zoned": false, 00:11:07.110 "supported_io_types": { 00:11:07.110 "read": true, 00:11:07.110 "write": true, 00:11:07.110 "unmap": true, 00:11:07.110 "flush": true, 00:11:07.110 "reset": true, 00:11:07.110 "nvme_admin": false, 00:11:07.110 "nvme_io": false, 00:11:07.110 "nvme_io_md": false, 00:11:07.110 "write_zeroes": true, 00:11:07.110 "zcopy": true, 00:11:07.110 "get_zone_info": false, 00:11:07.110 "zone_management": false, 00:11:07.110 "zone_append": false, 00:11:07.110 "compare": false, 00:11:07.110 "compare_and_write": false, 00:11:07.110 "abort": true, 00:11:07.110 "seek_hole": false, 00:11:07.110 "seek_data": false, 00:11:07.110 "copy": true, 00:11:07.110 "nvme_iov_md": false 00:11:07.110 }, 00:11:07.110 "memory_domains": [ 00:11:07.110 { 00:11:07.110 "dma_device_id": "system", 00:11:07.110 "dma_device_type": 1 00:11:07.110 }, 00:11:07.110 { 00:11:07.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.110 "dma_device_type": 2 00:11:07.110 } 00:11:07.110 ], 00:11:07.110 "driver_specific": {} 00:11:07.110 } 00:11:07.110 ] 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.110 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.111 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.370 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.370 "name": "Existed_Raid", 00:11:07.370 "uuid": "74cba447-180c-4233-89eb-208819b2a3f2", 00:11:07.370 "strip_size_kb": 64, 00:11:07.370 "state": "configuring", 00:11:07.370 "raid_level": "concat", 00:11:07.370 "superblock": true, 00:11:07.370 "num_base_bdevs": 4, 00:11:07.370 "num_base_bdevs_discovered": 2, 00:11:07.370 
"num_base_bdevs_operational": 4, 00:11:07.370 "base_bdevs_list": [ 00:11:07.370 { 00:11:07.370 "name": "BaseBdev1", 00:11:07.370 "uuid": "2a5748fe-044d-401e-8c3e-bbf392d60b58", 00:11:07.370 "is_configured": true, 00:11:07.370 "data_offset": 2048, 00:11:07.370 "data_size": 63488 00:11:07.370 }, 00:11:07.370 { 00:11:07.370 "name": "BaseBdev2", 00:11:07.370 "uuid": "a78a7312-2fa7-4347-b991-1fc4efab4f92", 00:11:07.370 "is_configured": true, 00:11:07.370 "data_offset": 2048, 00:11:07.370 "data_size": 63488 00:11:07.370 }, 00:11:07.370 { 00:11:07.370 "name": "BaseBdev3", 00:11:07.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.370 "is_configured": false, 00:11:07.370 "data_offset": 0, 00:11:07.370 "data_size": 0 00:11:07.370 }, 00:11:07.370 { 00:11:07.370 "name": "BaseBdev4", 00:11:07.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.370 "is_configured": false, 00:11:07.370 "data_offset": 0, 00:11:07.370 "data_size": 0 00:11:07.370 } 00:11:07.370 ] 00:11:07.370 }' 00:11:07.370 02:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.370 02:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.629 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:07.629 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.630 [2024-10-13 02:25:26.172661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.630 BaseBdev3 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.630 [ 00:11:07.630 { 00:11:07.630 "name": "BaseBdev3", 00:11:07.630 "aliases": [ 00:11:07.630 "02b5b991-0519-4b05-a2fe-19d66cb72ddf" 00:11:07.630 ], 00:11:07.630 "product_name": "Malloc disk", 00:11:07.630 "block_size": 512, 00:11:07.630 "num_blocks": 65536, 00:11:07.630 "uuid": "02b5b991-0519-4b05-a2fe-19d66cb72ddf", 00:11:07.630 "assigned_rate_limits": { 00:11:07.630 "rw_ios_per_sec": 0, 00:11:07.630 "rw_mbytes_per_sec": 0, 00:11:07.630 "r_mbytes_per_sec": 0, 00:11:07.630 "w_mbytes_per_sec": 0 00:11:07.630 }, 00:11:07.630 "claimed": true, 00:11:07.630 "claim_type": "exclusive_write", 00:11:07.630 "zoned": false, 00:11:07.630 "supported_io_types": { 
00:11:07.630 "read": true, 00:11:07.630 "write": true, 00:11:07.630 "unmap": true, 00:11:07.630 "flush": true, 00:11:07.630 "reset": true, 00:11:07.630 "nvme_admin": false, 00:11:07.630 "nvme_io": false, 00:11:07.630 "nvme_io_md": false, 00:11:07.630 "write_zeroes": true, 00:11:07.630 "zcopy": true, 00:11:07.630 "get_zone_info": false, 00:11:07.630 "zone_management": false, 00:11:07.630 "zone_append": false, 00:11:07.630 "compare": false, 00:11:07.630 "compare_and_write": false, 00:11:07.630 "abort": true, 00:11:07.630 "seek_hole": false, 00:11:07.630 "seek_data": false, 00:11:07.630 "copy": true, 00:11:07.630 "nvme_iov_md": false 00:11:07.630 }, 00:11:07.630 "memory_domains": [ 00:11:07.630 { 00:11:07.630 "dma_device_id": "system", 00:11:07.630 "dma_device_type": 1 00:11:07.630 }, 00:11:07.630 { 00:11:07.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.630 "dma_device_type": 2 00:11:07.630 } 00:11:07.630 ], 00:11:07.630 "driver_specific": {} 00:11:07.630 } 00:11:07.630 ] 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.630 "name": "Existed_Raid", 00:11:07.630 "uuid": "74cba447-180c-4233-89eb-208819b2a3f2", 00:11:07.630 "strip_size_kb": 64, 00:11:07.630 "state": "configuring", 00:11:07.630 "raid_level": "concat", 00:11:07.630 "superblock": true, 00:11:07.630 "num_base_bdevs": 4, 00:11:07.630 "num_base_bdevs_discovered": 3, 00:11:07.630 "num_base_bdevs_operational": 4, 00:11:07.630 "base_bdevs_list": [ 00:11:07.630 { 00:11:07.630 "name": "BaseBdev1", 00:11:07.630 "uuid": "2a5748fe-044d-401e-8c3e-bbf392d60b58", 00:11:07.630 "is_configured": true, 00:11:07.630 "data_offset": 2048, 00:11:07.630 "data_size": 63488 00:11:07.630 }, 00:11:07.630 { 00:11:07.630 "name": "BaseBdev2", 00:11:07.630 
"uuid": "a78a7312-2fa7-4347-b991-1fc4efab4f92", 00:11:07.630 "is_configured": true, 00:11:07.630 "data_offset": 2048, 00:11:07.630 "data_size": 63488 00:11:07.630 }, 00:11:07.630 { 00:11:07.630 "name": "BaseBdev3", 00:11:07.630 "uuid": "02b5b991-0519-4b05-a2fe-19d66cb72ddf", 00:11:07.630 "is_configured": true, 00:11:07.630 "data_offset": 2048, 00:11:07.630 "data_size": 63488 00:11:07.630 }, 00:11:07.630 { 00:11:07.630 "name": "BaseBdev4", 00:11:07.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.630 "is_configured": false, 00:11:07.630 "data_offset": 0, 00:11:07.630 "data_size": 0 00:11:07.630 } 00:11:07.630 ] 00:11:07.630 }' 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.630 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.200 BaseBdev4 00:11:08.200 [2024-10-13 02:25:26.701335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:08.200 [2024-10-13 02:25:26.701604] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:08.200 [2024-10-13 02:25:26.701632] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.200 [2024-10-13 02:25:26.701989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:08.200 [2024-10-13 02:25:26.702146] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:08.200 [2024-10-13 02:25:26.702168] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:11:08.200 [2024-10-13 02:25:26.702290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.200 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.200 [ 00:11:08.200 { 00:11:08.200 "name": "BaseBdev4", 00:11:08.200 "aliases": [ 00:11:08.200 "e59c8105-98cd-4b51-ab20-da9768c9cb59" 00:11:08.200 ], 00:11:08.200 "product_name": "Malloc disk", 00:11:08.200 "block_size": 512, 00:11:08.200 
"num_blocks": 65536, 00:11:08.200 "uuid": "e59c8105-98cd-4b51-ab20-da9768c9cb59", 00:11:08.200 "assigned_rate_limits": { 00:11:08.200 "rw_ios_per_sec": 0, 00:11:08.200 "rw_mbytes_per_sec": 0, 00:11:08.200 "r_mbytes_per_sec": 0, 00:11:08.200 "w_mbytes_per_sec": 0 00:11:08.200 }, 00:11:08.200 "claimed": true, 00:11:08.200 "claim_type": "exclusive_write", 00:11:08.200 "zoned": false, 00:11:08.200 "supported_io_types": { 00:11:08.200 "read": true, 00:11:08.200 "write": true, 00:11:08.200 "unmap": true, 00:11:08.200 "flush": true, 00:11:08.200 "reset": true, 00:11:08.201 "nvme_admin": false, 00:11:08.201 "nvme_io": false, 00:11:08.201 "nvme_io_md": false, 00:11:08.201 "write_zeroes": true, 00:11:08.201 "zcopy": true, 00:11:08.201 "get_zone_info": false, 00:11:08.201 "zone_management": false, 00:11:08.201 "zone_append": false, 00:11:08.201 "compare": false, 00:11:08.201 "compare_and_write": false, 00:11:08.201 "abort": true, 00:11:08.201 "seek_hole": false, 00:11:08.201 "seek_data": false, 00:11:08.201 "copy": true, 00:11:08.201 "nvme_iov_md": false 00:11:08.201 }, 00:11:08.201 "memory_domains": [ 00:11:08.201 { 00:11:08.201 "dma_device_id": "system", 00:11:08.201 "dma_device_type": 1 00:11:08.201 }, 00:11:08.201 { 00:11:08.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.201 "dma_device_type": 2 00:11:08.201 } 00:11:08.201 ], 00:11:08.201 "driver_specific": {} 00:11:08.201 } 00:11:08.201 ] 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.201 "name": "Existed_Raid", 00:11:08.201 "uuid": "74cba447-180c-4233-89eb-208819b2a3f2", 00:11:08.201 "strip_size_kb": 64, 00:11:08.201 "state": "online", 00:11:08.201 "raid_level": "concat", 00:11:08.201 "superblock": true, 00:11:08.201 "num_base_bdevs": 4, 
00:11:08.201 "num_base_bdevs_discovered": 4, 00:11:08.201 "num_base_bdevs_operational": 4, 00:11:08.201 "base_bdevs_list": [ 00:11:08.201 { 00:11:08.201 "name": "BaseBdev1", 00:11:08.201 "uuid": "2a5748fe-044d-401e-8c3e-bbf392d60b58", 00:11:08.201 "is_configured": true, 00:11:08.201 "data_offset": 2048, 00:11:08.201 "data_size": 63488 00:11:08.201 }, 00:11:08.201 { 00:11:08.201 "name": "BaseBdev2", 00:11:08.201 "uuid": "a78a7312-2fa7-4347-b991-1fc4efab4f92", 00:11:08.201 "is_configured": true, 00:11:08.201 "data_offset": 2048, 00:11:08.201 "data_size": 63488 00:11:08.201 }, 00:11:08.201 { 00:11:08.201 "name": "BaseBdev3", 00:11:08.201 "uuid": "02b5b991-0519-4b05-a2fe-19d66cb72ddf", 00:11:08.201 "is_configured": true, 00:11:08.201 "data_offset": 2048, 00:11:08.201 "data_size": 63488 00:11:08.201 }, 00:11:08.201 { 00:11:08.201 "name": "BaseBdev4", 00:11:08.201 "uuid": "e59c8105-98cd-4b51-ab20-da9768c9cb59", 00:11:08.201 "is_configured": true, 00:11:08.201 "data_offset": 2048, 00:11:08.201 "data_size": 63488 00:11:08.201 } 00:11:08.201 ] 00:11:08.201 }' 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.201 02:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.460 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:08.460 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:08.460 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:08.460 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:08.460 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:08.460 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:08.720 
02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:08.720 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.720 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:08.720 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.720 [2024-10-13 02:25:27.153077] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.720 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.720 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:08.720 "name": "Existed_Raid", 00:11:08.720 "aliases": [ 00:11:08.720 "74cba447-180c-4233-89eb-208819b2a3f2" 00:11:08.720 ], 00:11:08.720 "product_name": "Raid Volume", 00:11:08.720 "block_size": 512, 00:11:08.720 "num_blocks": 253952, 00:11:08.720 "uuid": "74cba447-180c-4233-89eb-208819b2a3f2", 00:11:08.720 "assigned_rate_limits": { 00:11:08.720 "rw_ios_per_sec": 0, 00:11:08.720 "rw_mbytes_per_sec": 0, 00:11:08.720 "r_mbytes_per_sec": 0, 00:11:08.720 "w_mbytes_per_sec": 0 00:11:08.720 }, 00:11:08.720 "claimed": false, 00:11:08.720 "zoned": false, 00:11:08.720 "supported_io_types": { 00:11:08.720 "read": true, 00:11:08.720 "write": true, 00:11:08.720 "unmap": true, 00:11:08.720 "flush": true, 00:11:08.720 "reset": true, 00:11:08.720 "nvme_admin": false, 00:11:08.720 "nvme_io": false, 00:11:08.720 "nvme_io_md": false, 00:11:08.720 "write_zeroes": true, 00:11:08.720 "zcopy": false, 00:11:08.720 "get_zone_info": false, 00:11:08.720 "zone_management": false, 00:11:08.720 "zone_append": false, 00:11:08.720 "compare": false, 00:11:08.720 "compare_and_write": false, 00:11:08.720 "abort": false, 00:11:08.720 "seek_hole": false, 00:11:08.720 "seek_data": false, 00:11:08.720 "copy": false, 00:11:08.720 
"nvme_iov_md": false 00:11:08.720 }, 00:11:08.720 "memory_domains": [ 00:11:08.720 { 00:11:08.720 "dma_device_id": "system", 00:11:08.720 "dma_device_type": 1 00:11:08.720 }, 00:11:08.720 { 00:11:08.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.720 "dma_device_type": 2 00:11:08.720 }, 00:11:08.720 { 00:11:08.720 "dma_device_id": "system", 00:11:08.720 "dma_device_type": 1 00:11:08.720 }, 00:11:08.720 { 00:11:08.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.720 "dma_device_type": 2 00:11:08.720 }, 00:11:08.720 { 00:11:08.721 "dma_device_id": "system", 00:11:08.721 "dma_device_type": 1 00:11:08.721 }, 00:11:08.721 { 00:11:08.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.721 "dma_device_type": 2 00:11:08.721 }, 00:11:08.721 { 00:11:08.721 "dma_device_id": "system", 00:11:08.721 "dma_device_type": 1 00:11:08.721 }, 00:11:08.721 { 00:11:08.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.721 "dma_device_type": 2 00:11:08.721 } 00:11:08.721 ], 00:11:08.721 "driver_specific": { 00:11:08.721 "raid": { 00:11:08.721 "uuid": "74cba447-180c-4233-89eb-208819b2a3f2", 00:11:08.721 "strip_size_kb": 64, 00:11:08.721 "state": "online", 00:11:08.721 "raid_level": "concat", 00:11:08.721 "superblock": true, 00:11:08.721 "num_base_bdevs": 4, 00:11:08.721 "num_base_bdevs_discovered": 4, 00:11:08.721 "num_base_bdevs_operational": 4, 00:11:08.721 "base_bdevs_list": [ 00:11:08.721 { 00:11:08.721 "name": "BaseBdev1", 00:11:08.721 "uuid": "2a5748fe-044d-401e-8c3e-bbf392d60b58", 00:11:08.721 "is_configured": true, 00:11:08.721 "data_offset": 2048, 00:11:08.721 "data_size": 63488 00:11:08.721 }, 00:11:08.721 { 00:11:08.721 "name": "BaseBdev2", 00:11:08.721 "uuid": "a78a7312-2fa7-4347-b991-1fc4efab4f92", 00:11:08.721 "is_configured": true, 00:11:08.721 "data_offset": 2048, 00:11:08.721 "data_size": 63488 00:11:08.721 }, 00:11:08.721 { 00:11:08.721 "name": "BaseBdev3", 00:11:08.721 "uuid": "02b5b991-0519-4b05-a2fe-19d66cb72ddf", 00:11:08.721 "is_configured": true, 
00:11:08.721 "data_offset": 2048, 00:11:08.721 "data_size": 63488 00:11:08.721 }, 00:11:08.721 { 00:11:08.721 "name": "BaseBdev4", 00:11:08.721 "uuid": "e59c8105-98cd-4b51-ab20-da9768c9cb59", 00:11:08.721 "is_configured": true, 00:11:08.721 "data_offset": 2048, 00:11:08.721 "data_size": 63488 00:11:08.721 } 00:11:08.721 ] 00:11:08.721 } 00:11:08.721 } 00:11:08.721 }' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:08.721 BaseBdev2 00:11:08.721 BaseBdev3 00:11:08.721 BaseBdev4' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.721 02:25:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.721 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.980 [2024-10-13 02:25:27.440189] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.980 [2024-10-13 02:25:27.440267] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.980 [2024-10-13 02:25:27.440349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.980 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.981 "name": "Existed_Raid", 00:11:08.981 "uuid": "74cba447-180c-4233-89eb-208819b2a3f2", 00:11:08.981 "strip_size_kb": 64, 00:11:08.981 "state": "offline", 00:11:08.981 "raid_level": "concat", 00:11:08.981 "superblock": true, 00:11:08.981 "num_base_bdevs": 4, 00:11:08.981 "num_base_bdevs_discovered": 3, 00:11:08.981 "num_base_bdevs_operational": 3, 00:11:08.981 "base_bdevs_list": [ 00:11:08.981 { 00:11:08.981 "name": null, 00:11:08.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.981 "is_configured": false, 00:11:08.981 "data_offset": 0, 00:11:08.981 "data_size": 63488 00:11:08.981 }, 00:11:08.981 { 00:11:08.981 "name": "BaseBdev2", 00:11:08.981 "uuid": "a78a7312-2fa7-4347-b991-1fc4efab4f92", 00:11:08.981 "is_configured": true, 00:11:08.981 "data_offset": 2048, 00:11:08.981 "data_size": 63488 00:11:08.981 }, 00:11:08.981 { 00:11:08.981 "name": "BaseBdev3", 00:11:08.981 "uuid": "02b5b991-0519-4b05-a2fe-19d66cb72ddf", 00:11:08.981 "is_configured": true, 00:11:08.981 "data_offset": 2048, 00:11:08.981 "data_size": 63488 00:11:08.981 }, 00:11:08.981 { 00:11:08.981 "name": "BaseBdev4", 00:11:08.981 "uuid": "e59c8105-98cd-4b51-ab20-da9768c9cb59", 00:11:08.981 "is_configured": true, 00:11:08.981 "data_offset": 2048, 00:11:08.981 "data_size": 63488 00:11:08.981 } 00:11:08.981 ] 00:11:08.981 }' 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.981 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.548 02:25:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.548 02:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.548 [2024-10-13 02:25:27.996777] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.548 [2024-10-13 02:25:28.077843] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.548 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:09.549 02:25:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.549 [2024-10-13 02:25:28.158124] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:09.549 [2024-10-13 02:25:28.158239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.549 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 BaseBdev2 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 [ 00:11:09.809 { 00:11:09.809 "name": "BaseBdev2", 00:11:09.809 "aliases": [ 00:11:09.809 
"92d7df77-a577-4f6c-883d-e3a5e28bf28d" 00:11:09.809 ], 00:11:09.809 "product_name": "Malloc disk", 00:11:09.809 "block_size": 512, 00:11:09.809 "num_blocks": 65536, 00:11:09.809 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:09.809 "assigned_rate_limits": { 00:11:09.809 "rw_ios_per_sec": 0, 00:11:09.809 "rw_mbytes_per_sec": 0, 00:11:09.809 "r_mbytes_per_sec": 0, 00:11:09.809 "w_mbytes_per_sec": 0 00:11:09.809 }, 00:11:09.809 "claimed": false, 00:11:09.809 "zoned": false, 00:11:09.809 "supported_io_types": { 00:11:09.809 "read": true, 00:11:09.809 "write": true, 00:11:09.809 "unmap": true, 00:11:09.809 "flush": true, 00:11:09.809 "reset": true, 00:11:09.809 "nvme_admin": false, 00:11:09.809 "nvme_io": false, 00:11:09.809 "nvme_io_md": false, 00:11:09.809 "write_zeroes": true, 00:11:09.809 "zcopy": true, 00:11:09.809 "get_zone_info": false, 00:11:09.809 "zone_management": false, 00:11:09.809 "zone_append": false, 00:11:09.809 "compare": false, 00:11:09.809 "compare_and_write": false, 00:11:09.809 "abort": true, 00:11:09.809 "seek_hole": false, 00:11:09.809 "seek_data": false, 00:11:09.809 "copy": true, 00:11:09.809 "nvme_iov_md": false 00:11:09.809 }, 00:11:09.809 "memory_domains": [ 00:11:09.809 { 00:11:09.809 "dma_device_id": "system", 00:11:09.809 "dma_device_type": 1 00:11:09.809 }, 00:11:09.809 { 00:11:09.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.809 "dma_device_type": 2 00:11:09.809 } 00:11:09.809 ], 00:11:09.809 "driver_specific": {} 00:11:09.809 } 00:11:09.809 ] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.809 02:25:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 BaseBdev3 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 [ 00:11:09.809 { 
00:11:09.809 "name": "BaseBdev3", 00:11:09.809 "aliases": [ 00:11:09.809 "107fcc94-8bb1-49b1-8358-b004702101a5" 00:11:09.809 ], 00:11:09.809 "product_name": "Malloc disk", 00:11:09.809 "block_size": 512, 00:11:09.809 "num_blocks": 65536, 00:11:09.809 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:09.809 "assigned_rate_limits": { 00:11:09.809 "rw_ios_per_sec": 0, 00:11:09.809 "rw_mbytes_per_sec": 0, 00:11:09.809 "r_mbytes_per_sec": 0, 00:11:09.809 "w_mbytes_per_sec": 0 00:11:09.809 }, 00:11:09.809 "claimed": false, 00:11:09.809 "zoned": false, 00:11:09.809 "supported_io_types": { 00:11:09.809 "read": true, 00:11:09.809 "write": true, 00:11:09.809 "unmap": true, 00:11:09.809 "flush": true, 00:11:09.809 "reset": true, 00:11:09.809 "nvme_admin": false, 00:11:09.809 "nvme_io": false, 00:11:09.809 "nvme_io_md": false, 00:11:09.809 "write_zeroes": true, 00:11:09.809 "zcopy": true, 00:11:09.809 "get_zone_info": false, 00:11:09.809 "zone_management": false, 00:11:09.809 "zone_append": false, 00:11:09.809 "compare": false, 00:11:09.809 "compare_and_write": false, 00:11:09.809 "abort": true, 00:11:09.809 "seek_hole": false, 00:11:09.809 "seek_data": false, 00:11:09.809 "copy": true, 00:11:09.809 "nvme_iov_md": false 00:11:09.809 }, 00:11:09.809 "memory_domains": [ 00:11:09.809 { 00:11:09.809 "dma_device_id": "system", 00:11:09.809 "dma_device_type": 1 00:11:09.809 }, 00:11:09.809 { 00:11:09.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.809 "dma_device_type": 2 00:11:09.809 } 00:11:09.809 ], 00:11:09.809 "driver_specific": {} 00:11:09.809 } 00:11:09.809 ] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 BaseBdev4 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:09.809 [ 00:11:09.809 { 00:11:09.809 "name": "BaseBdev4", 00:11:09.809 "aliases": [ 00:11:09.809 "00d45c38-2783-46ee-80eb-4d0ce1d18f8f" 00:11:09.809 ], 00:11:09.809 "product_name": "Malloc disk", 00:11:09.809 "block_size": 512, 00:11:09.809 "num_blocks": 65536, 00:11:09.809 "uuid": "00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:09.809 "assigned_rate_limits": { 00:11:09.809 "rw_ios_per_sec": 0, 00:11:09.809 "rw_mbytes_per_sec": 0, 00:11:09.809 "r_mbytes_per_sec": 0, 00:11:09.809 "w_mbytes_per_sec": 0 00:11:09.809 }, 00:11:09.809 "claimed": false, 00:11:09.809 "zoned": false, 00:11:09.809 "supported_io_types": { 00:11:09.809 "read": true, 00:11:09.809 "write": true, 00:11:09.809 "unmap": true, 00:11:09.809 "flush": true, 00:11:09.809 "reset": true, 00:11:09.809 "nvme_admin": false, 00:11:09.809 "nvme_io": false, 00:11:09.809 "nvme_io_md": false, 00:11:09.809 "write_zeroes": true, 00:11:09.809 "zcopy": true, 00:11:09.809 "get_zone_info": false, 00:11:09.809 "zone_management": false, 00:11:09.809 "zone_append": false, 00:11:09.809 "compare": false, 00:11:09.809 "compare_and_write": false, 00:11:09.809 "abort": true, 00:11:09.809 "seek_hole": false, 00:11:09.809 "seek_data": false, 00:11:09.809 "copy": true, 00:11:09.809 "nvme_iov_md": false 00:11:09.809 }, 00:11:09.809 "memory_domains": [ 00:11:09.809 { 00:11:09.809 "dma_device_id": "system", 00:11:09.809 "dma_device_type": 1 00:11:09.809 }, 00:11:09.809 { 00:11:09.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.809 "dma_device_type": 2 00:11:09.809 } 00:11:09.809 ], 00:11:09.809 "driver_specific": {} 00:11:09.809 } 00:11:09.809 ] 00:11:09.809 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.810 02:25:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.810 [2024-10-13 02:25:28.431594] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.810 [2024-10-13 02:25:28.431724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.810 [2024-10-13 02:25:28.431807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.810 [2024-10-13 02:25:28.434266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.810 [2024-10-13 02:25:28.434375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.810 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.083 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.083 "name": "Existed_Raid", 00:11:10.083 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:10.083 "strip_size_kb": 64, 00:11:10.083 "state": "configuring", 00:11:10.083 "raid_level": "concat", 00:11:10.083 "superblock": true, 00:11:10.083 "num_base_bdevs": 4, 00:11:10.083 "num_base_bdevs_discovered": 3, 00:11:10.083 "num_base_bdevs_operational": 4, 00:11:10.083 "base_bdevs_list": [ 00:11:10.083 { 00:11:10.083 "name": "BaseBdev1", 00:11:10.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.083 "is_configured": false, 00:11:10.083 "data_offset": 0, 00:11:10.083 "data_size": 0 00:11:10.083 }, 00:11:10.083 { 00:11:10.083 "name": "BaseBdev2", 00:11:10.083 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:10.083 "is_configured": true, 00:11:10.083 "data_offset": 2048, 00:11:10.083 "data_size": 63488 
00:11:10.083 }, 00:11:10.083 { 00:11:10.083 "name": "BaseBdev3", 00:11:10.083 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:10.083 "is_configured": true, 00:11:10.083 "data_offset": 2048, 00:11:10.083 "data_size": 63488 00:11:10.083 }, 00:11:10.083 { 00:11:10.083 "name": "BaseBdev4", 00:11:10.083 "uuid": "00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:10.083 "is_configured": true, 00:11:10.083 "data_offset": 2048, 00:11:10.083 "data_size": 63488 00:11:10.083 } 00:11:10.083 ] 00:11:10.083 }' 00:11:10.083 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.083 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.348 [2024-10-13 02:25:28.906754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.348 "name": "Existed_Raid", 00:11:10.348 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:10.348 "strip_size_kb": 64, 00:11:10.348 "state": "configuring", 00:11:10.348 "raid_level": "concat", 00:11:10.348 "superblock": true, 00:11:10.348 "num_base_bdevs": 4, 00:11:10.348 "num_base_bdevs_discovered": 2, 00:11:10.348 "num_base_bdevs_operational": 4, 00:11:10.348 "base_bdevs_list": [ 00:11:10.348 { 00:11:10.348 "name": "BaseBdev1", 00:11:10.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.348 "is_configured": false, 00:11:10.348 "data_offset": 0, 00:11:10.348 "data_size": 0 00:11:10.348 }, 00:11:10.348 { 00:11:10.348 "name": null, 00:11:10.348 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:10.348 "is_configured": false, 00:11:10.348 "data_offset": 0, 00:11:10.348 "data_size": 63488 
00:11:10.348 }, 00:11:10.348 { 00:11:10.348 "name": "BaseBdev3", 00:11:10.348 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:10.348 "is_configured": true, 00:11:10.348 "data_offset": 2048, 00:11:10.348 "data_size": 63488 00:11:10.348 }, 00:11:10.348 { 00:11:10.348 "name": "BaseBdev4", 00:11:10.348 "uuid": "00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:10.348 "is_configured": true, 00:11:10.348 "data_offset": 2048, 00:11:10.348 "data_size": 63488 00:11:10.348 } 00:11:10.348 ] 00:11:10.348 }' 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.348 02:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 [2024-10-13 02:25:29.427505] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.918 BaseBdev1 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 [ 00:11:10.918 { 00:11:10.918 "name": "BaseBdev1", 00:11:10.918 "aliases": [ 00:11:10.918 "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3" 00:11:10.918 ], 00:11:10.918 "product_name": "Malloc disk", 00:11:10.918 "block_size": 512, 00:11:10.918 "num_blocks": 65536, 00:11:10.918 "uuid": "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3", 00:11:10.918 "assigned_rate_limits": { 00:11:10.918 "rw_ios_per_sec": 0, 00:11:10.918 "rw_mbytes_per_sec": 0, 
00:11:10.918 "r_mbytes_per_sec": 0, 00:11:10.918 "w_mbytes_per_sec": 0 00:11:10.918 }, 00:11:10.918 "claimed": true, 00:11:10.918 "claim_type": "exclusive_write", 00:11:10.918 "zoned": false, 00:11:10.918 "supported_io_types": { 00:11:10.918 "read": true, 00:11:10.918 "write": true, 00:11:10.918 "unmap": true, 00:11:10.918 "flush": true, 00:11:10.918 "reset": true, 00:11:10.918 "nvme_admin": false, 00:11:10.918 "nvme_io": false, 00:11:10.918 "nvme_io_md": false, 00:11:10.918 "write_zeroes": true, 00:11:10.918 "zcopy": true, 00:11:10.918 "get_zone_info": false, 00:11:10.918 "zone_management": false, 00:11:10.918 "zone_append": false, 00:11:10.918 "compare": false, 00:11:10.918 "compare_and_write": false, 00:11:10.918 "abort": true, 00:11:10.918 "seek_hole": false, 00:11:10.918 "seek_data": false, 00:11:10.918 "copy": true, 00:11:10.918 "nvme_iov_md": false 00:11:10.918 }, 00:11:10.918 "memory_domains": [ 00:11:10.918 { 00:11:10.918 "dma_device_id": "system", 00:11:10.918 "dma_device_type": 1 00:11:10.918 }, 00:11:10.918 { 00:11:10.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.918 "dma_device_type": 2 00:11:10.918 } 00:11:10.918 ], 00:11:10.918 "driver_specific": {} 00:11:10.918 } 00:11:10.918 ] 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.918 02:25:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.918 "name": "Existed_Raid", 00:11:10.918 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:10.918 "strip_size_kb": 64, 00:11:10.918 "state": "configuring", 00:11:10.918 "raid_level": "concat", 00:11:10.918 "superblock": true, 00:11:10.918 "num_base_bdevs": 4, 00:11:10.918 "num_base_bdevs_discovered": 3, 00:11:10.918 "num_base_bdevs_operational": 4, 00:11:10.918 "base_bdevs_list": [ 00:11:10.918 { 00:11:10.918 "name": "BaseBdev1", 00:11:10.918 "uuid": "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3", 00:11:10.918 "is_configured": true, 00:11:10.918 "data_offset": 2048, 00:11:10.918 "data_size": 63488 00:11:10.918 }, 00:11:10.918 { 
00:11:10.918 "name": null, 00:11:10.918 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:10.918 "is_configured": false, 00:11:10.918 "data_offset": 0, 00:11:10.918 "data_size": 63488 00:11:10.918 }, 00:11:10.918 { 00:11:10.918 "name": "BaseBdev3", 00:11:10.918 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:10.918 "is_configured": true, 00:11:10.918 "data_offset": 2048, 00:11:10.918 "data_size": 63488 00:11:10.918 }, 00:11:10.918 { 00:11:10.918 "name": "BaseBdev4", 00:11:10.918 "uuid": "00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:10.918 "is_configured": true, 00:11:10.918 "data_offset": 2048, 00:11:10.918 "data_size": 63488 00:11:10.918 } 00:11:10.918 ] 00:11:10.918 }' 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.918 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.487 [2024-10-13 02:25:29.970696] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.487 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.488 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.488 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.488 02:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.488 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.488 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.488 02:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.488 02:25:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.488 "name": "Existed_Raid", 00:11:11.488 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:11.488 "strip_size_kb": 64, 00:11:11.488 "state": "configuring", 00:11:11.488 "raid_level": "concat", 00:11:11.488 "superblock": true, 00:11:11.488 "num_base_bdevs": 4, 00:11:11.488 "num_base_bdevs_discovered": 2, 00:11:11.488 "num_base_bdevs_operational": 4, 00:11:11.488 "base_bdevs_list": [ 00:11:11.488 { 00:11:11.488 "name": "BaseBdev1", 00:11:11.488 "uuid": "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3", 00:11:11.488 "is_configured": true, 00:11:11.488 "data_offset": 2048, 00:11:11.488 "data_size": 63488 00:11:11.488 }, 00:11:11.488 { 00:11:11.488 "name": null, 00:11:11.488 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:11.488 "is_configured": false, 00:11:11.488 "data_offset": 0, 00:11:11.488 "data_size": 63488 00:11:11.488 }, 00:11:11.488 { 00:11:11.488 "name": null, 00:11:11.488 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:11.488 "is_configured": false, 00:11:11.488 "data_offset": 0, 00:11:11.488 "data_size": 63488 00:11:11.488 }, 00:11:11.488 { 00:11:11.488 "name": "BaseBdev4", 00:11:11.488 "uuid": "00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:11.488 "is_configured": true, 00:11:11.488 "data_offset": 2048, 00:11:11.488 "data_size": 63488 00:11:11.488 } 00:11:11.488 ] 00:11:11.488 }' 00:11:11.488 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.488 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.748 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.748 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.748 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.748 
02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.748 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.748 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:11.748 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:11.748 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.748 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.008 [2024-10-13 02:25:30.430027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.008 "name": "Existed_Raid", 00:11:12.008 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:12.008 "strip_size_kb": 64, 00:11:12.008 "state": "configuring", 00:11:12.008 "raid_level": "concat", 00:11:12.008 "superblock": true, 00:11:12.008 "num_base_bdevs": 4, 00:11:12.008 "num_base_bdevs_discovered": 3, 00:11:12.008 "num_base_bdevs_operational": 4, 00:11:12.008 "base_bdevs_list": [ 00:11:12.008 { 00:11:12.008 "name": "BaseBdev1", 00:11:12.008 "uuid": "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3", 00:11:12.008 "is_configured": true, 00:11:12.008 "data_offset": 2048, 00:11:12.008 "data_size": 63488 00:11:12.008 }, 00:11:12.008 { 00:11:12.008 "name": null, 00:11:12.008 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:12.008 "is_configured": false, 00:11:12.008 "data_offset": 0, 00:11:12.008 "data_size": 63488 00:11:12.008 }, 00:11:12.008 { 00:11:12.008 "name": "BaseBdev3", 00:11:12.008 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:12.008 "is_configured": true, 00:11:12.008 "data_offset": 2048, 00:11:12.008 "data_size": 63488 00:11:12.008 }, 00:11:12.008 { 00:11:12.008 "name": "BaseBdev4", 00:11:12.008 "uuid": 
"00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:12.008 "is_configured": true, 00:11:12.008 "data_offset": 2048, 00:11:12.008 "data_size": 63488 00:11:12.008 } 00:11:12.008 ] 00:11:12.008 }' 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.008 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.267 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.267 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:12.267 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.267 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.267 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.528 [2024-10-13 02:25:30.973100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.528 02:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.528 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.528 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.528 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.528 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.528 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.528 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.528 "name": "Existed_Raid", 00:11:12.528 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:12.528 "strip_size_kb": 64, 00:11:12.528 "state": "configuring", 00:11:12.528 "raid_level": "concat", 00:11:12.528 "superblock": true, 00:11:12.528 "num_base_bdevs": 4, 00:11:12.528 "num_base_bdevs_discovered": 2, 00:11:12.528 "num_base_bdevs_operational": 4, 00:11:12.528 "base_bdevs_list": [ 00:11:12.528 { 00:11:12.528 "name": null, 00:11:12.528 
"uuid": "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3", 00:11:12.529 "is_configured": false, 00:11:12.529 "data_offset": 0, 00:11:12.529 "data_size": 63488 00:11:12.529 }, 00:11:12.529 { 00:11:12.529 "name": null, 00:11:12.529 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:12.529 "is_configured": false, 00:11:12.529 "data_offset": 0, 00:11:12.529 "data_size": 63488 00:11:12.529 }, 00:11:12.529 { 00:11:12.529 "name": "BaseBdev3", 00:11:12.529 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:12.529 "is_configured": true, 00:11:12.529 "data_offset": 2048, 00:11:12.529 "data_size": 63488 00:11:12.529 }, 00:11:12.529 { 00:11:12.529 "name": "BaseBdev4", 00:11:12.529 "uuid": "00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:12.529 "is_configured": true, 00:11:12.529 "data_offset": 2048, 00:11:12.529 "data_size": 63488 00:11:12.529 } 00:11:12.529 ] 00:11:12.529 }' 00:11:12.529 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.529 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.790 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.790 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.790 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.790 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.057 [2024-10-13 02:25:31.504831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.057 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.058 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.058 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.058 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.058 02:25:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.058 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.058 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.058 "name": "Existed_Raid", 00:11:13.058 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:13.058 "strip_size_kb": 64, 00:11:13.058 "state": "configuring", 00:11:13.058 "raid_level": "concat", 00:11:13.058 "superblock": true, 00:11:13.058 "num_base_bdevs": 4, 00:11:13.058 "num_base_bdevs_discovered": 3, 00:11:13.058 "num_base_bdevs_operational": 4, 00:11:13.058 "base_bdevs_list": [ 00:11:13.058 { 00:11:13.058 "name": null, 00:11:13.058 "uuid": "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3", 00:11:13.058 "is_configured": false, 00:11:13.058 "data_offset": 0, 00:11:13.058 "data_size": 63488 00:11:13.058 }, 00:11:13.058 { 00:11:13.058 "name": "BaseBdev2", 00:11:13.058 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:13.058 "is_configured": true, 00:11:13.058 "data_offset": 2048, 00:11:13.058 "data_size": 63488 00:11:13.058 }, 00:11:13.058 { 00:11:13.058 "name": "BaseBdev3", 00:11:13.058 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:13.058 "is_configured": true, 00:11:13.058 "data_offset": 2048, 00:11:13.058 "data_size": 63488 00:11:13.058 }, 00:11:13.058 { 00:11:13.058 "name": "BaseBdev4", 00:11:13.058 "uuid": "00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:13.058 "is_configured": true, 00:11:13.058 "data_offset": 2048, 00:11:13.058 "data_size": 63488 00:11:13.058 } 00:11:13.058 ] 00:11:13.058 }' 00:11:13.058 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.058 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.326 02:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.326 02:25:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.326 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.326 02:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.326 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.586 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c98fcc4c-1331-41ea-99f3-a15f68f1e0f3 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.587 NewBaseBdev 00:11:13.587 [2024-10-13 02:25:32.077175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:13.587 [2024-10-13 02:25:32.077418] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:13.587 [2024-10-13 02:25:32.077435] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.587 [2024-10-13 02:25:32.077737] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:11:13.587 [2024-10-13 02:25:32.077862] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:13.587 [2024-10-13 02:25:32.077912] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:13.587 [2024-10-13 02:25:32.078030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.587 02:25:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.587 [ 00:11:13.587 { 00:11:13.587 "name": "NewBaseBdev", 00:11:13.587 "aliases": [ 00:11:13.587 "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3" 00:11:13.587 ], 00:11:13.587 "product_name": "Malloc disk", 00:11:13.587 "block_size": 512, 00:11:13.587 "num_blocks": 65536, 00:11:13.587 "uuid": "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3", 00:11:13.587 "assigned_rate_limits": { 00:11:13.587 "rw_ios_per_sec": 0, 00:11:13.587 "rw_mbytes_per_sec": 0, 00:11:13.587 "r_mbytes_per_sec": 0, 00:11:13.587 "w_mbytes_per_sec": 0 00:11:13.587 }, 00:11:13.587 "claimed": true, 00:11:13.587 "claim_type": "exclusive_write", 00:11:13.587 "zoned": false, 00:11:13.587 "supported_io_types": { 00:11:13.587 "read": true, 00:11:13.587 "write": true, 00:11:13.587 "unmap": true, 00:11:13.587 "flush": true, 00:11:13.587 "reset": true, 00:11:13.587 "nvme_admin": false, 00:11:13.587 "nvme_io": false, 00:11:13.587 "nvme_io_md": false, 00:11:13.587 "write_zeroes": true, 00:11:13.587 "zcopy": true, 00:11:13.587 "get_zone_info": false, 00:11:13.587 "zone_management": false, 00:11:13.587 "zone_append": false, 00:11:13.587 "compare": false, 00:11:13.587 "compare_and_write": false, 00:11:13.587 "abort": true, 00:11:13.587 "seek_hole": false, 00:11:13.587 "seek_data": false, 00:11:13.587 "copy": true, 00:11:13.587 "nvme_iov_md": false 00:11:13.587 }, 00:11:13.587 "memory_domains": [ 00:11:13.587 { 00:11:13.587 "dma_device_id": "system", 00:11:13.587 "dma_device_type": 1 00:11:13.587 }, 00:11:13.587 { 00:11:13.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.587 "dma_device_type": 2 00:11:13.587 } 00:11:13.587 ], 00:11:13.587 "driver_specific": {} 00:11:13.587 } 00:11:13.587 ] 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:13.587 02:25:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.587 "name": "Existed_Raid", 00:11:13.587 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:13.587 "strip_size_kb": 64, 00:11:13.587 
"state": "online", 00:11:13.587 "raid_level": "concat", 00:11:13.587 "superblock": true, 00:11:13.587 "num_base_bdevs": 4, 00:11:13.587 "num_base_bdevs_discovered": 4, 00:11:13.587 "num_base_bdevs_operational": 4, 00:11:13.587 "base_bdevs_list": [ 00:11:13.587 { 00:11:13.587 "name": "NewBaseBdev", 00:11:13.587 "uuid": "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3", 00:11:13.587 "is_configured": true, 00:11:13.587 "data_offset": 2048, 00:11:13.587 "data_size": 63488 00:11:13.587 }, 00:11:13.587 { 00:11:13.587 "name": "BaseBdev2", 00:11:13.587 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:13.587 "is_configured": true, 00:11:13.587 "data_offset": 2048, 00:11:13.587 "data_size": 63488 00:11:13.587 }, 00:11:13.587 { 00:11:13.587 "name": "BaseBdev3", 00:11:13.587 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:13.587 "is_configured": true, 00:11:13.587 "data_offset": 2048, 00:11:13.587 "data_size": 63488 00:11:13.587 }, 00:11:13.587 { 00:11:13.587 "name": "BaseBdev4", 00:11:13.587 "uuid": "00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:13.587 "is_configured": true, 00:11:13.587 "data_offset": 2048, 00:11:13.587 "data_size": 63488 00:11:13.587 } 00:11:13.587 ] 00:11:13.587 }' 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.587 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.156 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:14.156 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:14.156 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.156 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.156 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.156 
02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.156 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.157 [2024-10-13 02:25:32.544789] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.157 "name": "Existed_Raid", 00:11:14.157 "aliases": [ 00:11:14.157 "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2" 00:11:14.157 ], 00:11:14.157 "product_name": "Raid Volume", 00:11:14.157 "block_size": 512, 00:11:14.157 "num_blocks": 253952, 00:11:14.157 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:14.157 "assigned_rate_limits": { 00:11:14.157 "rw_ios_per_sec": 0, 00:11:14.157 "rw_mbytes_per_sec": 0, 00:11:14.157 "r_mbytes_per_sec": 0, 00:11:14.157 "w_mbytes_per_sec": 0 00:11:14.157 }, 00:11:14.157 "claimed": false, 00:11:14.157 "zoned": false, 00:11:14.157 "supported_io_types": { 00:11:14.157 "read": true, 00:11:14.157 "write": true, 00:11:14.157 "unmap": true, 00:11:14.157 "flush": true, 00:11:14.157 "reset": true, 00:11:14.157 "nvme_admin": false, 00:11:14.157 "nvme_io": false, 00:11:14.157 "nvme_io_md": false, 00:11:14.157 "write_zeroes": true, 00:11:14.157 "zcopy": false, 00:11:14.157 "get_zone_info": false, 00:11:14.157 "zone_management": false, 00:11:14.157 "zone_append": false, 00:11:14.157 "compare": false, 00:11:14.157 "compare_and_write": false, 00:11:14.157 "abort": 
false, 00:11:14.157 "seek_hole": false, 00:11:14.157 "seek_data": false, 00:11:14.157 "copy": false, 00:11:14.157 "nvme_iov_md": false 00:11:14.157 }, 00:11:14.157 "memory_domains": [ 00:11:14.157 { 00:11:14.157 "dma_device_id": "system", 00:11:14.157 "dma_device_type": 1 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.157 "dma_device_type": 2 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 "dma_device_id": "system", 00:11:14.157 "dma_device_type": 1 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.157 "dma_device_type": 2 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 "dma_device_id": "system", 00:11:14.157 "dma_device_type": 1 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.157 "dma_device_type": 2 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 "dma_device_id": "system", 00:11:14.157 "dma_device_type": 1 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.157 "dma_device_type": 2 00:11:14.157 } 00:11:14.157 ], 00:11:14.157 "driver_specific": { 00:11:14.157 "raid": { 00:11:14.157 "uuid": "5eec8e05-c97d-4822-b4f5-8b684e3ad9a2", 00:11:14.157 "strip_size_kb": 64, 00:11:14.157 "state": "online", 00:11:14.157 "raid_level": "concat", 00:11:14.157 "superblock": true, 00:11:14.157 "num_base_bdevs": 4, 00:11:14.157 "num_base_bdevs_discovered": 4, 00:11:14.157 "num_base_bdevs_operational": 4, 00:11:14.157 "base_bdevs_list": [ 00:11:14.157 { 00:11:14.157 "name": "NewBaseBdev", 00:11:14.157 "uuid": "c98fcc4c-1331-41ea-99f3-a15f68f1e0f3", 00:11:14.157 "is_configured": true, 00:11:14.157 "data_offset": 2048, 00:11:14.157 "data_size": 63488 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 "name": "BaseBdev2", 00:11:14.157 "uuid": "92d7df77-a577-4f6c-883d-e3a5e28bf28d", 00:11:14.157 "is_configured": true, 00:11:14.157 "data_offset": 2048, 00:11:14.157 "data_size": 63488 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 
"name": "BaseBdev3", 00:11:14.157 "uuid": "107fcc94-8bb1-49b1-8358-b004702101a5", 00:11:14.157 "is_configured": true, 00:11:14.157 "data_offset": 2048, 00:11:14.157 "data_size": 63488 00:11:14.157 }, 00:11:14.157 { 00:11:14.157 "name": "BaseBdev4", 00:11:14.157 "uuid": "00d45c38-2783-46ee-80eb-4d0ce1d18f8f", 00:11:14.157 "is_configured": true, 00:11:14.157 "data_offset": 2048, 00:11:14.157 "data_size": 63488 00:11:14.157 } 00:11:14.157 ] 00:11:14.157 } 00:11:14.157 } 00:11:14.157 }' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:14.157 BaseBdev2 00:11:14.157 BaseBdev3 00:11:14.157 BaseBdev4' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.157 02:25:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.157 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 [2024-10-13 02:25:32.899824] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.417 [2024-10-13 02:25:32.899860] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.417 [2024-10-13 02:25:32.899976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.417 [2024-10-13 02:25:32.900057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.417 [2024-10-13 02:25:32.900074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82724 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82724 ']' 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82724 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82724 00:11:14.417 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.418 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.418 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82724' 00:11:14.418 killing process with pid 82724 00:11:14.418 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82724 00:11:14.418 [2024-10-13 02:25:32.940602] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.418 02:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82724 00:11:14.418 [2024-10-13 02:25:33.017600] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.987 02:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:14.987 00:11:14.987 real 0m10.080s 00:11:14.987 user 0m16.971s 00:11:14.987 sys 0m2.173s 00:11:14.987 02:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.987 02:25:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.987 ************************************ 00:11:14.987 END TEST raid_state_function_test_sb 00:11:14.987 ************************************ 00:11:14.987 02:25:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:14.987 02:25:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:14.987 02:25:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.987 02:25:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.987 ************************************ 00:11:14.987 START TEST raid_superblock_test 00:11:14.987 ************************************ 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83378 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83378 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83378 ']' 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.987 02:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.987 [2024-10-13 02:25:33.555166] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:14.987 [2024-10-13 02:25:33.555333] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83378 ] 00:11:15.247 [2024-10-13 02:25:33.700975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.247 [2024-10-13 02:25:33.774810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.247 [2024-10-13 02:25:33.852661] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.247 [2024-10-13 02:25:33.852703] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:15.817 
02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.817 malloc1 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.817 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.817 [2024-10-13 02:25:34.416593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:15.818 [2024-10-13 02:25:34.416724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.818 [2024-10-13 02:25:34.416760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:15.818 [2024-10-13 02:25:34.416797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.818 [2024-10-13 02:25:34.419345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.818 [2024-10-13 02:25:34.419421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:15.818 pt1 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.818 malloc2 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.818 [2024-10-13 02:25:34.460107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.818 [2024-10-13 02:25:34.460172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.818 [2024-10-13 02:25:34.460191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:15.818 [2024-10-13 02:25:34.460204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.818 [2024-10-13 02:25:34.462974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.818 [2024-10-13 02:25:34.463015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.818 
pt2 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.818 malloc3 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.818 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.818 [2024-10-13 02:25:34.495043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:15.818 [2024-10-13 02:25:34.495175] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.818 [2024-10-13 02:25:34.495217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:15.818 [2024-10-13 02:25:34.495259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.818 [2024-10-13 02:25:34.497827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.818 [2024-10-13 02:25:34.497917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:16.078 pt3 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.078 malloc4 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.078 [2024-10-13 02:25:34.533866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:16.078 [2024-10-13 02:25:34.533986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.078 [2024-10-13 02:25:34.534022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:16.078 [2024-10-13 02:25:34.534062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.078 [2024-10-13 02:25:34.536562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.078 [2024-10-13 02:25:34.536649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:16.078 pt4 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.078 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 [2024-10-13 02:25:34.545920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:16.079 [2024-10-13 
02:25:34.548134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.079 [2024-10-13 02:25:34.548245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:16.079 [2024-10-13 02:25:34.548312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:16.079 [2024-10-13 02:25:34.548532] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:16.079 [2024-10-13 02:25:34.548583] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:16.079 [2024-10-13 02:25:34.548881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:16.079 [2024-10-13 02:25:34.549068] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:16.079 [2024-10-13 02:25:34.549119] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:16.079 [2024-10-13 02:25:34.549295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.079 "name": "raid_bdev1", 00:11:16.079 "uuid": "8dfc321d-ddab-462c-a46f-ca1040b336c4", 00:11:16.079 "strip_size_kb": 64, 00:11:16.079 "state": "online", 00:11:16.079 "raid_level": "concat", 00:11:16.079 "superblock": true, 00:11:16.079 "num_base_bdevs": 4, 00:11:16.079 "num_base_bdevs_discovered": 4, 00:11:16.079 "num_base_bdevs_operational": 4, 00:11:16.079 "base_bdevs_list": [ 00:11:16.079 { 00:11:16.079 "name": "pt1", 00:11:16.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.079 "is_configured": true, 00:11:16.079 "data_offset": 2048, 00:11:16.079 "data_size": 63488 00:11:16.079 }, 00:11:16.079 { 00:11:16.079 "name": "pt2", 00:11:16.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.079 "is_configured": true, 00:11:16.079 "data_offset": 2048, 00:11:16.079 "data_size": 63488 00:11:16.079 }, 00:11:16.079 { 00:11:16.079 "name": "pt3", 00:11:16.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.079 "is_configured": true, 00:11:16.079 "data_offset": 2048, 00:11:16.079 
"data_size": 63488 00:11:16.079 }, 00:11:16.079 { 00:11:16.079 "name": "pt4", 00:11:16.079 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.079 "is_configured": true, 00:11:16.079 "data_offset": 2048, 00:11:16.079 "data_size": 63488 00:11:16.079 } 00:11:16.079 ] 00:11:16.079 }' 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.079 02:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.339 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:16.339 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:16.339 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.339 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.339 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.339 02:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.339 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.339 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.339 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.339 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.339 [2024-10-13 02:25:35.013486] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.598 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.598 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.598 "name": "raid_bdev1", 00:11:16.598 "aliases": [ 00:11:16.598 "8dfc321d-ddab-462c-a46f-ca1040b336c4" 
00:11:16.598 ], 00:11:16.598 "product_name": "Raid Volume", 00:11:16.598 "block_size": 512, 00:11:16.598 "num_blocks": 253952, 00:11:16.598 "uuid": "8dfc321d-ddab-462c-a46f-ca1040b336c4", 00:11:16.598 "assigned_rate_limits": { 00:11:16.598 "rw_ios_per_sec": 0, 00:11:16.598 "rw_mbytes_per_sec": 0, 00:11:16.598 "r_mbytes_per_sec": 0, 00:11:16.598 "w_mbytes_per_sec": 0 00:11:16.598 }, 00:11:16.598 "claimed": false, 00:11:16.598 "zoned": false, 00:11:16.598 "supported_io_types": { 00:11:16.598 "read": true, 00:11:16.598 "write": true, 00:11:16.598 "unmap": true, 00:11:16.598 "flush": true, 00:11:16.598 "reset": true, 00:11:16.599 "nvme_admin": false, 00:11:16.599 "nvme_io": false, 00:11:16.599 "nvme_io_md": false, 00:11:16.599 "write_zeroes": true, 00:11:16.599 "zcopy": false, 00:11:16.599 "get_zone_info": false, 00:11:16.599 "zone_management": false, 00:11:16.599 "zone_append": false, 00:11:16.599 "compare": false, 00:11:16.599 "compare_and_write": false, 00:11:16.599 "abort": false, 00:11:16.599 "seek_hole": false, 00:11:16.599 "seek_data": false, 00:11:16.599 "copy": false, 00:11:16.599 "nvme_iov_md": false 00:11:16.599 }, 00:11:16.599 "memory_domains": [ 00:11:16.599 { 00:11:16.599 "dma_device_id": "system", 00:11:16.599 "dma_device_type": 1 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.599 "dma_device_type": 2 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "dma_device_id": "system", 00:11:16.599 "dma_device_type": 1 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.599 "dma_device_type": 2 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "dma_device_id": "system", 00:11:16.599 "dma_device_type": 1 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.599 "dma_device_type": 2 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "dma_device_id": "system", 00:11:16.599 "dma_device_type": 1 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:16.599 "dma_device_type": 2 00:11:16.599 } 00:11:16.599 ], 00:11:16.599 "driver_specific": { 00:11:16.599 "raid": { 00:11:16.599 "uuid": "8dfc321d-ddab-462c-a46f-ca1040b336c4", 00:11:16.599 "strip_size_kb": 64, 00:11:16.599 "state": "online", 00:11:16.599 "raid_level": "concat", 00:11:16.599 "superblock": true, 00:11:16.599 "num_base_bdevs": 4, 00:11:16.599 "num_base_bdevs_discovered": 4, 00:11:16.599 "num_base_bdevs_operational": 4, 00:11:16.599 "base_bdevs_list": [ 00:11:16.599 { 00:11:16.599 "name": "pt1", 00:11:16.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.599 "is_configured": true, 00:11:16.599 "data_offset": 2048, 00:11:16.599 "data_size": 63488 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "name": "pt2", 00:11:16.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.599 "is_configured": true, 00:11:16.599 "data_offset": 2048, 00:11:16.599 "data_size": 63488 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "name": "pt3", 00:11:16.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.599 "is_configured": true, 00:11:16.599 "data_offset": 2048, 00:11:16.599 "data_size": 63488 00:11:16.599 }, 00:11:16.599 { 00:11:16.599 "name": "pt4", 00:11:16.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.599 "is_configured": true, 00:11:16.599 "data_offset": 2048, 00:11:16.599 "data_size": 63488 00:11:16.599 } 00:11:16.599 ] 00:11:16.599 } 00:11:16.599 } 00:11:16.599 }' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:16.599 pt2 00:11:16.599 pt3 00:11:16.599 pt4' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.599 02:25:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.599 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:16.860 [2024-10-13 02:25:35.312810] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8dfc321d-ddab-462c-a46f-ca1040b336c4 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8dfc321d-ddab-462c-a46f-ca1040b336c4 ']' 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.860 [2024-10-13 02:25:35.360447] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.860 [2024-10-13 02:25:35.360483] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.860 [2024-10-13 02:25:35.360587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.860 [2024-10-13 02:25:35.360670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.860 [2024-10-13 02:25:35.360696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.860 [2024-10-13 02:25:35.520200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:16.860 [2024-10-13 02:25:35.522434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:16.860 [2024-10-13 02:25:35.522490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:16.860 [2024-10-13 02:25:35.522527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:16.860 [2024-10-13 02:25:35.522584] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:16.860 [2024-10-13 02:25:35.522634] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:16.860 [2024-10-13 02:25:35.522654] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:16.860 [2024-10-13 02:25:35.522671] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:16.860 [2024-10-13 02:25:35.522686] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.860 [2024-10-13 02:25:35.522696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:11:16.860 request: 00:11:16.860 { 00:11:16.860 "name": "raid_bdev1", 00:11:16.860 "raid_level": "concat", 00:11:16.860 "base_bdevs": [ 00:11:16.860 "malloc1", 00:11:16.860 "malloc2", 00:11:16.860 "malloc3", 00:11:16.860 "malloc4" 00:11:16.860 ], 00:11:16.860 "strip_size_kb": 64, 00:11:16.860 "superblock": false, 00:11:16.860 "method": "bdev_raid_create", 00:11:16.860 "req_id": 1 00:11:16.860 } 00:11:16.860 Got JSON-RPC error response 00:11:16.860 response: 00:11:16.860 { 00:11:16.860 "code": -17, 00:11:16.860 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:16.860 } 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.860 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.121 [2024-10-13 02:25:35.584089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:17.121 [2024-10-13 02:25:35.584203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.121 [2024-10-13 02:25:35.584270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:17.121 [2024-10-13 02:25:35.584304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.121 [2024-10-13 02:25:35.587088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.121 [2024-10-13 02:25:35.587162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:17.121 [2024-10-13 02:25:35.587300] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:17.121 [2024-10-13 02:25:35.587382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.121 pt1 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.121 "name": "raid_bdev1", 00:11:17.121 "uuid": "8dfc321d-ddab-462c-a46f-ca1040b336c4", 00:11:17.121 "strip_size_kb": 64, 00:11:17.121 "state": "configuring", 00:11:17.121 "raid_level": "concat", 00:11:17.121 "superblock": true, 00:11:17.121 "num_base_bdevs": 4, 00:11:17.121 "num_base_bdevs_discovered": 1, 00:11:17.121 "num_base_bdevs_operational": 4, 00:11:17.121 "base_bdevs_list": [ 00:11:17.121 { 00:11:17.121 "name": "pt1", 00:11:17.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.121 "is_configured": true, 00:11:17.121 "data_offset": 2048, 00:11:17.121 "data_size": 63488 00:11:17.121 }, 00:11:17.121 { 00:11:17.121 "name": null, 00:11:17.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.121 "is_configured": false, 00:11:17.121 "data_offset": 2048, 00:11:17.121 "data_size": 63488 00:11:17.121 }, 00:11:17.121 { 00:11:17.121 "name": null, 00:11:17.121 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.121 "is_configured": false, 00:11:17.121 "data_offset": 2048, 00:11:17.121 "data_size": 63488 00:11:17.121 }, 00:11:17.121 { 00:11:17.121 "name": null, 00:11:17.121 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.121 "is_configured": false, 00:11:17.121 "data_offset": 2048, 00:11:17.121 "data_size": 63488 00:11:17.121 } 00:11:17.121 ] 00:11:17.121 }' 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.121 02:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.381 [2024-10-13 02:25:36.011385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.381 [2024-10-13 02:25:36.011463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.381 [2024-10-13 02:25:36.011489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:17.381 [2024-10-13 02:25:36.011499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.381 [2024-10-13 02:25:36.012028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.381 [2024-10-13 02:25:36.012070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.381 [2024-10-13 02:25:36.012176] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:17.381 [2024-10-13 02:25:36.012210] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.381 pt2 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.381 [2024-10-13 02:25:36.023389] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.381 02:25:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.381 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.640 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.640 "name": "raid_bdev1", 00:11:17.640 "uuid": "8dfc321d-ddab-462c-a46f-ca1040b336c4", 00:11:17.640 "strip_size_kb": 64, 00:11:17.640 "state": "configuring", 00:11:17.641 "raid_level": "concat", 00:11:17.641 "superblock": true, 00:11:17.641 "num_base_bdevs": 4, 00:11:17.641 "num_base_bdevs_discovered": 1, 00:11:17.641 "num_base_bdevs_operational": 4, 00:11:17.641 "base_bdevs_list": [ 00:11:17.641 { 00:11:17.641 "name": "pt1", 00:11:17.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.641 "is_configured": true, 00:11:17.641 "data_offset": 2048, 00:11:17.641 "data_size": 63488 00:11:17.641 }, 00:11:17.641 { 00:11:17.641 "name": null, 00:11:17.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.641 "is_configured": false, 00:11:17.641 "data_offset": 0, 00:11:17.641 "data_size": 63488 00:11:17.641 }, 00:11:17.641 { 00:11:17.641 "name": null, 00:11:17.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.641 "is_configured": false, 00:11:17.641 "data_offset": 2048, 00:11:17.641 "data_size": 63488 00:11:17.641 }, 00:11:17.641 { 00:11:17.641 "name": null, 00:11:17.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.641 "is_configured": false, 00:11:17.641 "data_offset": 2048, 00:11:17.641 "data_size": 63488 00:11:17.641 } 00:11:17.641 ] 00:11:17.641 }' 00:11:17.641 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.641 02:25:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.900 [2024-10-13 02:25:36.430706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.900 [2024-10-13 02:25:36.430853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.900 [2024-10-13 02:25:36.430911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:17.900 [2024-10-13 02:25:36.430948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.900 [2024-10-13 02:25:36.431470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.900 [2024-10-13 02:25:36.431531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.900 [2024-10-13 02:25:36.431641] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:17.900 [2024-10-13 02:25:36.431699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.900 pt2 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.900 [2024-10-13 02:25:36.442630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:17.900 [2024-10-13 02:25:36.442720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.900 [2024-10-13 02:25:36.442754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:17.900 [2024-10-13 02:25:36.442795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.900 [2024-10-13 02:25:36.443206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.900 [2024-10-13 02:25:36.443266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:17.900 [2024-10-13 02:25:36.443358] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:17.900 [2024-10-13 02:25:36.443409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:17.900 pt3 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.900 [2024-10-13 02:25:36.454633] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:17.900 [2024-10-13 02:25:36.454681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.900 [2024-10-13 02:25:36.454695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:17.900 [2024-10-13 02:25:36.454705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.900 [2024-10-13 02:25:36.455030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.900 [2024-10-13 02:25:36.455051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:17.900 [2024-10-13 02:25:36.455098] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:17.900 [2024-10-13 02:25:36.455116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:17.900 [2024-10-13 02:25:36.455215] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:17.900 [2024-10-13 02:25:36.455227] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.900 [2024-10-13 02:25:36.455482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:17.900 [2024-10-13 02:25:36.455599] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:17.900 [2024-10-13 02:25:36.455607] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:11:17.900 [2024-10-13 02:25:36.455704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.900 pt4 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.900 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.901 "name": "raid_bdev1", 00:11:17.901 "uuid": "8dfc321d-ddab-462c-a46f-ca1040b336c4", 00:11:17.901 "strip_size_kb": 64, 00:11:17.901 "state": "online", 00:11:17.901 "raid_level": "concat", 00:11:17.901 
"superblock": true, 00:11:17.901 "num_base_bdevs": 4, 00:11:17.901 "num_base_bdevs_discovered": 4, 00:11:17.901 "num_base_bdevs_operational": 4, 00:11:17.901 "base_bdevs_list": [ 00:11:17.901 { 00:11:17.901 "name": "pt1", 00:11:17.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.901 "is_configured": true, 00:11:17.901 "data_offset": 2048, 00:11:17.901 "data_size": 63488 00:11:17.901 }, 00:11:17.901 { 00:11:17.901 "name": "pt2", 00:11:17.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.901 "is_configured": true, 00:11:17.901 "data_offset": 2048, 00:11:17.901 "data_size": 63488 00:11:17.901 }, 00:11:17.901 { 00:11:17.901 "name": "pt3", 00:11:17.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.901 "is_configured": true, 00:11:17.901 "data_offset": 2048, 00:11:17.901 "data_size": 63488 00:11:17.901 }, 00:11:17.901 { 00:11:17.901 "name": "pt4", 00:11:17.901 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.901 "is_configured": true, 00:11:17.901 "data_offset": 2048, 00:11:17.901 "data_size": 63488 00:11:17.901 } 00:11:17.901 ] 00:11:17.901 }' 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.901 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.471 02:25:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.471 [2024-10-13 02:25:36.926242] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.471 "name": "raid_bdev1", 00:11:18.471 "aliases": [ 00:11:18.471 "8dfc321d-ddab-462c-a46f-ca1040b336c4" 00:11:18.471 ], 00:11:18.471 "product_name": "Raid Volume", 00:11:18.471 "block_size": 512, 00:11:18.471 "num_blocks": 253952, 00:11:18.471 "uuid": "8dfc321d-ddab-462c-a46f-ca1040b336c4", 00:11:18.471 "assigned_rate_limits": { 00:11:18.471 "rw_ios_per_sec": 0, 00:11:18.471 "rw_mbytes_per_sec": 0, 00:11:18.471 "r_mbytes_per_sec": 0, 00:11:18.471 "w_mbytes_per_sec": 0 00:11:18.471 }, 00:11:18.471 "claimed": false, 00:11:18.471 "zoned": false, 00:11:18.471 "supported_io_types": { 00:11:18.471 "read": true, 00:11:18.471 "write": true, 00:11:18.471 "unmap": true, 00:11:18.471 "flush": true, 00:11:18.471 "reset": true, 00:11:18.471 "nvme_admin": false, 00:11:18.471 "nvme_io": false, 00:11:18.471 "nvme_io_md": false, 00:11:18.471 "write_zeroes": true, 00:11:18.471 "zcopy": false, 00:11:18.471 "get_zone_info": false, 00:11:18.471 "zone_management": false, 00:11:18.471 "zone_append": false, 00:11:18.471 "compare": false, 00:11:18.471 "compare_and_write": false, 00:11:18.471 "abort": false, 00:11:18.471 "seek_hole": false, 00:11:18.471 "seek_data": false, 00:11:18.471 "copy": false, 00:11:18.471 "nvme_iov_md": false 00:11:18.471 }, 00:11:18.471 
"memory_domains": [ 00:11:18.471 { 00:11:18.471 "dma_device_id": "system", 00:11:18.471 "dma_device_type": 1 00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.471 "dma_device_type": 2 00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "dma_device_id": "system", 00:11:18.471 "dma_device_type": 1 00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.471 "dma_device_type": 2 00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "dma_device_id": "system", 00:11:18.471 "dma_device_type": 1 00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.471 "dma_device_type": 2 00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "dma_device_id": "system", 00:11:18.471 "dma_device_type": 1 00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.471 "dma_device_type": 2 00:11:18.471 } 00:11:18.471 ], 00:11:18.471 "driver_specific": { 00:11:18.471 "raid": { 00:11:18.471 "uuid": "8dfc321d-ddab-462c-a46f-ca1040b336c4", 00:11:18.471 "strip_size_kb": 64, 00:11:18.471 "state": "online", 00:11:18.471 "raid_level": "concat", 00:11:18.471 "superblock": true, 00:11:18.471 "num_base_bdevs": 4, 00:11:18.471 "num_base_bdevs_discovered": 4, 00:11:18.471 "num_base_bdevs_operational": 4, 00:11:18.471 "base_bdevs_list": [ 00:11:18.471 { 00:11:18.471 "name": "pt1", 00:11:18.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.471 "is_configured": true, 00:11:18.471 "data_offset": 2048, 00:11:18.471 "data_size": 63488 00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "name": "pt2", 00:11:18.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.471 "is_configured": true, 00:11:18.471 "data_offset": 2048, 00:11:18.471 "data_size": 63488 00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "name": "pt3", 00:11:18.471 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.471 "is_configured": true, 00:11:18.471 "data_offset": 2048, 00:11:18.471 "data_size": 63488 
00:11:18.471 }, 00:11:18.471 { 00:11:18.471 "name": "pt4", 00:11:18.471 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.471 "is_configured": true, 00:11:18.471 "data_offset": 2048, 00:11:18.471 "data_size": 63488 00:11:18.471 } 00:11:18.471 ] 00:11:18.471 } 00:11:18.471 } 00:11:18.471 }' 00:11:18.471 02:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:18.471 pt2 00:11:18.471 pt3 00:11:18.471 pt4' 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.471 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:18.732 [2024-10-13 02:25:37.229597] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8dfc321d-ddab-462c-a46f-ca1040b336c4 '!=' 8dfc321d-ddab-462c-a46f-ca1040b336c4 ']' 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83378 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83378 ']' 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83378 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # uname 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83378 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.732 killing process with pid 83378 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83378' 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83378 00:11:18.732 [2024-10-13 02:25:37.318812] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.732 [2024-10-13 02:25:37.318978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.732 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83378 00:11:18.732 [2024-10-13 02:25:37.319065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.732 [2024-10-13 02:25:37.319080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:11:18.732 [2024-10-13 02:25:37.399078] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.323 02:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:19.323 00:11:19.323 real 0m4.305s 00:11:19.323 user 0m6.548s 00:11:19.323 sys 0m1.033s 00:11:19.323 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.323 02:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.323 ************************************ 00:11:19.323 END TEST raid_superblock_test 
00:11:19.323 ************************************ 00:11:19.323 02:25:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:19.323 02:25:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:19.323 02:25:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.323 02:25:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.323 ************************************ 00:11:19.323 START TEST raid_read_error_test 00:11:19.323 ************************************ 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XGrOIcKU5K 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83633 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83633 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83633 ']' 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.323 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.324 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.324 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.324 02:25:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.324 [2024-10-13 02:25:37.948710] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:19.324 [2024-10-13 02:25:37.948968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83633 ] 00:11:19.582 [2024-10-13 02:25:38.095254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.582 [2024-10-13 02:25:38.164897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.582 [2024-10-13 02:25:38.243503] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.582 [2024-10-13 02:25:38.243541] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.151 BaseBdev1_malloc 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.151 true 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.151 [2024-10-13 02:25:38.815118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:20.151 [2024-10-13 02:25:38.815272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.151 [2024-10-13 02:25:38.815316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:20.151 [2024-10-13 02:25:38.815349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.151 [2024-10-13 02:25:38.817842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.151 [2024-10-13 02:25:38.817930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:20.151 BaseBdev1 00:11:20.151 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.152 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.152 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:20.152 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.152 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 BaseBdev2_malloc 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 true 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 [2024-10-13 02:25:38.873474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:20.412 [2024-10-13 02:25:38.873540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.412 [2024-10-13 02:25:38.873563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:20.412 [2024-10-13 02:25:38.873571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.412 [2024-10-13 02:25:38.876012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.412 [2024-10-13 02:25:38.876109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:20.412 BaseBdev2 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 BaseBdev3_malloc 00:11:20.412 02:25:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 true 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 [2024-10-13 02:25:38.920794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:20.412 [2024-10-13 02:25:38.920843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.412 [2024-10-13 02:25:38.920878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:20.412 [2024-10-13 02:25:38.920899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.412 [2024-10-13 02:25:38.923273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.412 [2024-10-13 02:25:38.923350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:20.412 BaseBdev3 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 BaseBdev4_malloc 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 true 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 [2024-10-13 02:25:38.967274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:20.412 [2024-10-13 02:25:38.967323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.412 [2024-10-13 02:25:38.967345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:20.412 [2024-10-13 02:25:38.967353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.412 [2024-10-13 02:25:38.969746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.412 [2024-10-13 02:25:38.969783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:20.412 BaseBdev4 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 [2024-10-13 02:25:38.979324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.412 [2024-10-13 02:25:38.981477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.412 [2024-10-13 02:25:38.981644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.412 [2024-10-13 02:25:38.981714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.412 [2024-10-13 02:25:38.981932] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:20.412 [2024-10-13 02:25:38.981945] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:20.412 [2024-10-13 02:25:38.982193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:20.412 [2024-10-13 02:25:38.982336] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:20.412 [2024-10-13 02:25:38.982350] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:20.412 [2024-10-13 02:25:38.982472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:20.412 02:25:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.412 02:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.412 02:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.412 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.412 "name": "raid_bdev1", 00:11:20.412 "uuid": "3882baa9-4529-4f3a-b5ae-87352bbfc4cf", 00:11:20.412 "strip_size_kb": 64, 00:11:20.412 "state": "online", 00:11:20.412 "raid_level": "concat", 00:11:20.412 "superblock": true, 00:11:20.412 "num_base_bdevs": 4, 00:11:20.412 "num_base_bdevs_discovered": 4, 00:11:20.412 "num_base_bdevs_operational": 4, 00:11:20.412 "base_bdevs_list": [ 
00:11:20.412 { 00:11:20.412 "name": "BaseBdev1", 00:11:20.412 "uuid": "006dc650-9b04-52d2-a148-478911854ac0", 00:11:20.412 "is_configured": true, 00:11:20.412 "data_offset": 2048, 00:11:20.412 "data_size": 63488 00:11:20.412 }, 00:11:20.412 { 00:11:20.412 "name": "BaseBdev2", 00:11:20.412 "uuid": "4bf0684c-16ce-50ec-9db7-23209b2a2e19", 00:11:20.412 "is_configured": true, 00:11:20.412 "data_offset": 2048, 00:11:20.412 "data_size": 63488 00:11:20.412 }, 00:11:20.412 { 00:11:20.413 "name": "BaseBdev3", 00:11:20.413 "uuid": "a522fdc4-243c-56b0-a7c8-7370f7043459", 00:11:20.413 "is_configured": true, 00:11:20.413 "data_offset": 2048, 00:11:20.413 "data_size": 63488 00:11:20.413 }, 00:11:20.413 { 00:11:20.413 "name": "BaseBdev4", 00:11:20.413 "uuid": "448b17f7-5f78-5a8f-a826-007e7dddee48", 00:11:20.413 "is_configured": true, 00:11:20.413 "data_offset": 2048, 00:11:20.413 "data_size": 63488 00:11:20.413 } 00:11:20.413 ] 00:11:20.413 }' 00:11:20.413 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.413 02:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.982 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:20.982 02:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:20.982 [2024-10-13 02:25:39.459087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.921 02:25:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.921 02:25:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.921 "name": "raid_bdev1", 00:11:21.921 "uuid": "3882baa9-4529-4f3a-b5ae-87352bbfc4cf", 00:11:21.921 "strip_size_kb": 64, 00:11:21.921 "state": "online", 00:11:21.921 "raid_level": "concat", 00:11:21.921 "superblock": true, 00:11:21.921 "num_base_bdevs": 4, 00:11:21.921 "num_base_bdevs_discovered": 4, 00:11:21.921 "num_base_bdevs_operational": 4, 00:11:21.921 "base_bdevs_list": [ 00:11:21.921 { 00:11:21.921 "name": "BaseBdev1", 00:11:21.921 "uuid": "006dc650-9b04-52d2-a148-478911854ac0", 00:11:21.921 "is_configured": true, 00:11:21.921 "data_offset": 2048, 00:11:21.921 "data_size": 63488 00:11:21.921 }, 00:11:21.921 { 00:11:21.921 "name": "BaseBdev2", 00:11:21.921 "uuid": "4bf0684c-16ce-50ec-9db7-23209b2a2e19", 00:11:21.921 "is_configured": true, 00:11:21.921 "data_offset": 2048, 00:11:21.921 "data_size": 63488 00:11:21.921 }, 00:11:21.921 { 00:11:21.921 "name": "BaseBdev3", 00:11:21.921 "uuid": "a522fdc4-243c-56b0-a7c8-7370f7043459", 00:11:21.921 "is_configured": true, 00:11:21.921 "data_offset": 2048, 00:11:21.921 "data_size": 63488 00:11:21.921 }, 00:11:21.921 { 00:11:21.921 "name": "BaseBdev4", 00:11:21.921 "uuid": "448b17f7-5f78-5a8f-a826-007e7dddee48", 00:11:21.921 "is_configured": true, 00:11:21.921 "data_offset": 2048, 00:11:21.921 "data_size": 63488 00:11:21.921 } 00:11:21.921 ] 00:11:21.921 }' 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.921 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.181 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:22.181 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.181 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.181 [2024-10-13 02:25:40.852036] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.181 [2024-10-13 02:25:40.852072] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.181 [2024-10-13 02:25:40.854591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.181 [2024-10-13 02:25:40.854646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.181 [2024-10-13 02:25:40.854698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.181 [2024-10-13 02:25:40.854706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:22.181 { 00:11:22.181 "results": [ 00:11:22.181 { 00:11:22.181 "job": "raid_bdev1", 00:11:22.181 "core_mask": "0x1", 00:11:22.181 "workload": "randrw", 00:11:22.181 "percentage": 50, 00:11:22.181 "status": "finished", 00:11:22.181 "queue_depth": 1, 00:11:22.181 "io_size": 131072, 00:11:22.181 "runtime": 1.39338, 00:11:22.181 "iops": 14125.36422225093, 00:11:22.181 "mibps": 1765.6705277813662, 00:11:22.181 "io_failed": 1, 00:11:22.181 "io_timeout": 0, 00:11:22.181 "avg_latency_us": 99.53467756517216, 00:11:22.181 "min_latency_us": 25.4882096069869, 00:11:22.181 "max_latency_us": 1416.6078602620087 00:11:22.181 } 00:11:22.181 ], 00:11:22.181 "core_count": 1 00:11:22.181 } 00:11:22.181 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.181 02:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83633 00:11:22.181 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83633 ']' 00:11:22.181 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83633 00:11:22.181 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:22.441 02:25:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.441 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83633 00:11:22.441 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.441 killing process with pid 83633 00:11:22.441 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:22.441 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83633' 00:11:22.441 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83633 00:11:22.441 [2024-10-13 02:25:40.885659] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.441 02:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83633 00:11:22.441 [2024-10-13 02:25:40.952829] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XGrOIcKU5K 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:22.701 ************************************ 00:11:22.701 END TEST raid_read_error_test 00:11:22.701 ************************************ 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:22.701 00:11:22.701 real 0m3.498s 
00:11:22.701 user 0m4.190s 00:11:22.701 sys 0m0.647s 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.701 02:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.961 02:25:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:22.961 02:25:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:22.961 02:25:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.961 02:25:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.961 ************************************ 00:11:22.961 START TEST raid_write_error_test 00:11:22.961 ************************************ 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6T6H602vb2 00:11:22.961 02:25:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83764 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83764 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83764 ']' 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.961 02:25:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.961 [2024-10-13 02:25:41.521218] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:22.961 [2024-10-13 02:25:41.521433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83764 ] 00:11:23.221 [2024-10-13 02:25:41.667501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.221 [2024-10-13 02:25:41.744926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.221 [2024-10-13 02:25:41.821564] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.221 [2024-10-13 02:25:41.821613] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.790 BaseBdev1_malloc 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.790 true 00:11:23.790 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.791 [2024-10-13 02:25:42.388902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:23.791 [2024-10-13 02:25:42.388982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.791 [2024-10-13 02:25:42.389010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:23.791 [2024-10-13 02:25:42.389022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.791 [2024-10-13 02:25:42.391542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.791 [2024-10-13 02:25:42.391646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.791 BaseBdev1 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.791 BaseBdev2_malloc 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:23.791 02:25:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.791 true 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.791 [2024-10-13 02:25:42.445180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:23.791 [2024-10-13 02:25:42.445254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.791 [2024-10-13 02:25:42.445277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:23.791 [2024-10-13 02:25:42.445285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.791 [2024-10-13 02:25:42.447765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.791 [2024-10-13 02:25:42.447863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.791 BaseBdev2 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.791 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:24.051 BaseBdev3_malloc 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.051 true 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.051 [2024-10-13 02:25:42.492038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:24.051 [2024-10-13 02:25:42.492142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.051 [2024-10-13 02:25:42.492183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:24.051 [2024-10-13 02:25:42.492193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.051 [2024-10-13 02:25:42.494652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.051 [2024-10-13 02:25:42.494686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:24.051 BaseBdev3 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.051 BaseBdev4_malloc 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.051 true 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.051 [2024-10-13 02:25:42.538723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:24.051 [2024-10-13 02:25:42.538813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.051 [2024-10-13 02:25:42.538847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:24.051 [2024-10-13 02:25:42.538857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.051 [2024-10-13 02:25:42.541352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.051 [2024-10-13 02:25:42.541389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:24.051 BaseBdev4 
00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.051 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.051 [2024-10-13 02:25:42.550776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.051 [2024-10-13 02:25:42.552973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.051 [2024-10-13 02:25:42.553120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.051 [2024-10-13 02:25:42.553195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.051 [2024-10-13 02:25:42.553400] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:24.051 [2024-10-13 02:25:42.553412] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:24.052 [2024-10-13 02:25:42.553683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:24.052 [2024-10-13 02:25:42.553823] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:24.052 [2024-10-13 02:25:42.553842] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:24.052 [2024-10-13 02:25:42.554021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.052 "name": "raid_bdev1", 00:11:24.052 "uuid": "f5fffc40-f0b0-434b-a3fa-c5645052bbe2", 00:11:24.052 "strip_size_kb": 64, 00:11:24.052 "state": "online", 00:11:24.052 "raid_level": "concat", 00:11:24.052 "superblock": true, 00:11:24.052 "num_base_bdevs": 4, 00:11:24.052 "num_base_bdevs_discovered": 4, 00:11:24.052 
"num_base_bdevs_operational": 4, 00:11:24.052 "base_bdevs_list": [ 00:11:24.052 { 00:11:24.052 "name": "BaseBdev1", 00:11:24.052 "uuid": "2b39963a-4721-53e2-9627-66ece6f94745", 00:11:24.052 "is_configured": true, 00:11:24.052 "data_offset": 2048, 00:11:24.052 "data_size": 63488 00:11:24.052 }, 00:11:24.052 { 00:11:24.052 "name": "BaseBdev2", 00:11:24.052 "uuid": "cec1fa06-4ed5-55b3-b50a-ceaf2a1f1ed0", 00:11:24.052 "is_configured": true, 00:11:24.052 "data_offset": 2048, 00:11:24.052 "data_size": 63488 00:11:24.052 }, 00:11:24.052 { 00:11:24.052 "name": "BaseBdev3", 00:11:24.052 "uuid": "2b022a9c-2e59-53fb-86e7-23f8232dfeb6", 00:11:24.052 "is_configured": true, 00:11:24.052 "data_offset": 2048, 00:11:24.052 "data_size": 63488 00:11:24.052 }, 00:11:24.052 { 00:11:24.052 "name": "BaseBdev4", 00:11:24.052 "uuid": "018feb23-48eb-5324-8cdc-3d0b11a5d554", 00:11:24.052 "is_configured": true, 00:11:24.052 "data_offset": 2048, 00:11:24.052 "data_size": 63488 00:11:24.052 } 00:11:24.052 ] 00:11:24.052 }' 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.052 02:25:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.621 02:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:24.621 02:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:24.621 [2024-10-13 02:25:43.126346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.560 02:25:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.560 "name": "raid_bdev1", 00:11:25.560 "uuid": "f5fffc40-f0b0-434b-a3fa-c5645052bbe2", 00:11:25.560 "strip_size_kb": 64, 00:11:25.560 "state": "online", 00:11:25.560 "raid_level": "concat", 00:11:25.560 "superblock": true, 00:11:25.560 "num_base_bdevs": 4, 00:11:25.560 "num_base_bdevs_discovered": 4, 00:11:25.560 "num_base_bdevs_operational": 4, 00:11:25.560 "base_bdevs_list": [ 00:11:25.560 { 00:11:25.560 "name": "BaseBdev1", 00:11:25.560 "uuid": "2b39963a-4721-53e2-9627-66ece6f94745", 00:11:25.560 "is_configured": true, 00:11:25.560 "data_offset": 2048, 00:11:25.560 "data_size": 63488 00:11:25.560 }, 00:11:25.560 { 00:11:25.560 "name": "BaseBdev2", 00:11:25.560 "uuid": "cec1fa06-4ed5-55b3-b50a-ceaf2a1f1ed0", 00:11:25.560 "is_configured": true, 00:11:25.560 "data_offset": 2048, 00:11:25.560 "data_size": 63488 00:11:25.560 }, 00:11:25.560 { 00:11:25.560 "name": "BaseBdev3", 00:11:25.560 "uuid": "2b022a9c-2e59-53fb-86e7-23f8232dfeb6", 00:11:25.560 "is_configured": true, 00:11:25.560 "data_offset": 2048, 00:11:25.560 "data_size": 63488 00:11:25.560 }, 00:11:25.560 { 00:11:25.560 "name": "BaseBdev4", 00:11:25.560 "uuid": "018feb23-48eb-5324-8cdc-3d0b11a5d554", 00:11:25.560 "is_configured": true, 00:11:25.560 "data_offset": 2048, 00:11:25.560 "data_size": 63488 00:11:25.560 } 00:11:25.560 ] 00:11:25.560 }' 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.560 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.147 [2024-10-13 02:25:44.544307] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.147 [2024-10-13 02:25:44.544401] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.147 [2024-10-13 02:25:44.546962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.147 [2024-10-13 02:25:44.547074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.147 [2024-10-13 02:25:44.547152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.147 [2024-10-13 02:25:44.547198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:26.147 { 00:11:26.147 "results": [ 00:11:26.147 { 00:11:26.147 "job": "raid_bdev1", 00:11:26.147 "core_mask": "0x1", 00:11:26.147 "workload": "randrw", 00:11:26.147 "percentage": 50, 00:11:26.147 "status": "finished", 00:11:26.147 "queue_depth": 1, 00:11:26.147 "io_size": 131072, 00:11:26.147 "runtime": 1.418421, 00:11:26.147 "iops": 14098.071024047162, 00:11:26.147 "mibps": 1762.2588780058952, 00:11:26.147 "io_failed": 1, 00:11:26.147 "io_timeout": 0, 00:11:26.147 "avg_latency_us": 99.7396508209773, 00:11:26.147 "min_latency_us": 25.823580786026202, 00:11:26.147 "max_latency_us": 1366.5257641921398 00:11:26.147 } 00:11:26.147 ], 00:11:26.147 "core_count": 1 00:11:26.147 } 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83764 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83764 ']' 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83764 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83764 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83764' 00:11:26.147 killing process with pid 83764 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83764 00:11:26.147 [2024-10-13 02:25:44.590836] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.147 02:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83764 00:11:26.147 [2024-10-13 02:25:44.655346] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.427 02:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6T6H602vb2 00:11:26.427 02:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:26.427 02:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:26.427 ************************************ 00:11:26.427 END TEST raid_write_error_test 00:11:26.427 ************************************ 00:11:26.427 02:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:26.427 02:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:26.427 02:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.427 02:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.427 02:25:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:26.427 00:11:26.427 real 0m3.625s 00:11:26.427 user 0m4.458s 00:11:26.427 sys 0m0.657s 00:11:26.427 02:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.427 02:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.427 02:25:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:26.427 02:25:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:26.427 02:25:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:26.427 02:25:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.427 02:25:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.687 ************************************ 00:11:26.687 START TEST raid_state_function_test 00:11:26.687 ************************************ 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:26.687 02:25:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83902 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83902' 00:11:26.687 Process raid pid: 83902 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83902 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83902 ']' 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.687 02:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.687 [2024-10-13 02:25:45.211481] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:26.687 [2024-10-13 02:25:45.211731] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.687 [2024-10-13 02:25:45.359907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.947 [2024-10-13 02:25:45.430407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.947 [2024-10-13 02:25:45.507223] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.947 [2024-10-13 02:25:45.507369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.518 [2024-10-13 02:25:46.047057] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.518 [2024-10-13 02:25:46.047112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.518 [2024-10-13 02:25:46.047125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.518 [2024-10-13 02:25:46.047135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.518 [2024-10-13 02:25:46.047142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:27.518 [2024-10-13 02:25:46.047155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.518 [2024-10-13 02:25:46.047161] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:27.518 [2024-10-13 02:25:46.047171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.518 "name": "Existed_Raid", 00:11:27.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.518 "strip_size_kb": 0, 00:11:27.518 "state": "configuring", 00:11:27.518 "raid_level": "raid1", 00:11:27.518 "superblock": false, 00:11:27.518 "num_base_bdevs": 4, 00:11:27.518 "num_base_bdevs_discovered": 0, 00:11:27.518 "num_base_bdevs_operational": 4, 00:11:27.518 "base_bdevs_list": [ 00:11:27.518 { 00:11:27.518 "name": "BaseBdev1", 00:11:27.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.518 "is_configured": false, 00:11:27.518 "data_offset": 0, 00:11:27.518 "data_size": 0 00:11:27.518 }, 00:11:27.518 { 00:11:27.518 "name": "BaseBdev2", 00:11:27.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.518 "is_configured": false, 00:11:27.518 "data_offset": 0, 00:11:27.518 "data_size": 0 00:11:27.518 }, 00:11:27.518 { 00:11:27.518 "name": "BaseBdev3", 00:11:27.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.518 "is_configured": false, 00:11:27.518 "data_offset": 0, 00:11:27.518 "data_size": 0 00:11:27.518 }, 00:11:27.518 { 00:11:27.518 "name": "BaseBdev4", 00:11:27.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.518 "is_configured": false, 00:11:27.518 "data_offset": 0, 00:11:27.518 "data_size": 0 00:11:27.518 } 00:11:27.518 ] 00:11:27.518 }' 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.518 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.778 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:27.778 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.778 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.778 [2024-10-13 02:25:46.458142] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.778 [2024-10-13 02:25:46.458251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.038 [2024-10-13 02:25:46.470125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.038 [2024-10-13 02:25:46.470210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.038 [2024-10-13 02:25:46.470238] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.038 [2024-10-13 02:25:46.470261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.038 [2024-10-13 02:25:46.470279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.038 [2024-10-13 02:25:46.470300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.038 [2024-10-13 02:25:46.470317] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:28.038 [2024-10-13 02:25:46.470339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.038 [2024-10-13 02:25:46.497398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.038 BaseBdev1 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.038 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.038 [ 00:11:28.038 { 00:11:28.038 "name": "BaseBdev1", 00:11:28.038 "aliases": [ 00:11:28.038 "ec92aa04-fda5-4d73-89b8-58c58448124c" 00:11:28.038 ], 00:11:28.038 "product_name": "Malloc disk", 00:11:28.038 "block_size": 512, 00:11:28.038 "num_blocks": 65536, 00:11:28.038 "uuid": "ec92aa04-fda5-4d73-89b8-58c58448124c", 00:11:28.038 "assigned_rate_limits": { 00:11:28.038 "rw_ios_per_sec": 0, 00:11:28.039 "rw_mbytes_per_sec": 0, 00:11:28.039 "r_mbytes_per_sec": 0, 00:11:28.039 "w_mbytes_per_sec": 0 00:11:28.039 }, 00:11:28.039 "claimed": true, 00:11:28.039 "claim_type": "exclusive_write", 00:11:28.039 "zoned": false, 00:11:28.039 "supported_io_types": { 00:11:28.039 "read": true, 00:11:28.039 "write": true, 00:11:28.039 "unmap": true, 00:11:28.039 "flush": true, 00:11:28.039 "reset": true, 00:11:28.039 "nvme_admin": false, 00:11:28.039 "nvme_io": false, 00:11:28.039 "nvme_io_md": false, 00:11:28.039 "write_zeroes": true, 00:11:28.039 "zcopy": true, 00:11:28.039 "get_zone_info": false, 00:11:28.039 "zone_management": false, 00:11:28.039 "zone_append": false, 00:11:28.039 "compare": false, 00:11:28.039 "compare_and_write": false, 00:11:28.039 "abort": true, 00:11:28.039 "seek_hole": false, 00:11:28.039 "seek_data": false, 00:11:28.039 "copy": true, 00:11:28.039 "nvme_iov_md": false 00:11:28.039 }, 00:11:28.039 "memory_domains": [ 00:11:28.039 { 00:11:28.039 "dma_device_id": "system", 00:11:28.039 "dma_device_type": 1 00:11:28.039 }, 00:11:28.039 { 00:11:28.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.039 "dma_device_type": 2 00:11:28.039 } 00:11:28.039 ], 00:11:28.039 "driver_specific": {} 00:11:28.039 } 00:11:28.039 ] 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.039 "name": "Existed_Raid", 00:11:28.039 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:28.039 "strip_size_kb": 0, 00:11:28.039 "state": "configuring", 00:11:28.039 "raid_level": "raid1", 00:11:28.039 "superblock": false, 00:11:28.039 "num_base_bdevs": 4, 00:11:28.039 "num_base_bdevs_discovered": 1, 00:11:28.039 "num_base_bdevs_operational": 4, 00:11:28.039 "base_bdevs_list": [ 00:11:28.039 { 00:11:28.039 "name": "BaseBdev1", 00:11:28.039 "uuid": "ec92aa04-fda5-4d73-89b8-58c58448124c", 00:11:28.039 "is_configured": true, 00:11:28.039 "data_offset": 0, 00:11:28.039 "data_size": 65536 00:11:28.039 }, 00:11:28.039 { 00:11:28.039 "name": "BaseBdev2", 00:11:28.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.039 "is_configured": false, 00:11:28.039 "data_offset": 0, 00:11:28.039 "data_size": 0 00:11:28.039 }, 00:11:28.039 { 00:11:28.039 "name": "BaseBdev3", 00:11:28.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.039 "is_configured": false, 00:11:28.039 "data_offset": 0, 00:11:28.039 "data_size": 0 00:11:28.039 }, 00:11:28.039 { 00:11:28.039 "name": "BaseBdev4", 00:11:28.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.039 "is_configured": false, 00:11:28.039 "data_offset": 0, 00:11:28.039 "data_size": 0 00:11:28.039 } 00:11:28.039 ] 00:11:28.039 }' 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.039 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.300 [2024-10-13 02:25:46.876778] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.300 [2024-10-13 02:25:46.876843] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.300 [2024-10-13 02:25:46.888815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.300 [2024-10-13 02:25:46.891048] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.300 [2024-10-13 02:25:46.891149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.300 [2024-10-13 02:25:46.891164] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.300 [2024-10-13 02:25:46.891174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.300 [2024-10-13 02:25:46.891181] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:28.300 [2024-10-13 02:25:46.891189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.300 02:25:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.300 "name": "Existed_Raid", 00:11:28.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.300 "strip_size_kb": 0, 00:11:28.300 "state": "configuring", 00:11:28.300 "raid_level": "raid1", 00:11:28.300 "superblock": false, 00:11:28.300 "num_base_bdevs": 4, 00:11:28.300 "num_base_bdevs_discovered": 1, 00:11:28.300 
"num_base_bdevs_operational": 4, 00:11:28.300 "base_bdevs_list": [ 00:11:28.300 { 00:11:28.300 "name": "BaseBdev1", 00:11:28.300 "uuid": "ec92aa04-fda5-4d73-89b8-58c58448124c", 00:11:28.300 "is_configured": true, 00:11:28.300 "data_offset": 0, 00:11:28.300 "data_size": 65536 00:11:28.300 }, 00:11:28.300 { 00:11:28.300 "name": "BaseBdev2", 00:11:28.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.300 "is_configured": false, 00:11:28.300 "data_offset": 0, 00:11:28.300 "data_size": 0 00:11:28.300 }, 00:11:28.300 { 00:11:28.300 "name": "BaseBdev3", 00:11:28.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.300 "is_configured": false, 00:11:28.300 "data_offset": 0, 00:11:28.300 "data_size": 0 00:11:28.300 }, 00:11:28.300 { 00:11:28.300 "name": "BaseBdev4", 00:11:28.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.300 "is_configured": false, 00:11:28.300 "data_offset": 0, 00:11:28.300 "data_size": 0 00:11:28.300 } 00:11:28.300 ] 00:11:28.300 }' 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.300 02:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.869 [2024-10-13 02:25:47.347141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.869 BaseBdev2 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.869 [ 00:11:28.869 { 00:11:28.869 "name": "BaseBdev2", 00:11:28.869 "aliases": [ 00:11:28.869 "3abecfb5-fc1e-4642-9612-67c2f4afc08b" 00:11:28.869 ], 00:11:28.869 "product_name": "Malloc disk", 00:11:28.869 "block_size": 512, 00:11:28.869 "num_blocks": 65536, 00:11:28.869 "uuid": "3abecfb5-fc1e-4642-9612-67c2f4afc08b", 00:11:28.869 "assigned_rate_limits": { 00:11:28.869 "rw_ios_per_sec": 0, 00:11:28.869 "rw_mbytes_per_sec": 0, 00:11:28.869 "r_mbytes_per_sec": 0, 00:11:28.869 "w_mbytes_per_sec": 0 00:11:28.869 }, 00:11:28.869 "claimed": true, 00:11:28.869 "claim_type": "exclusive_write", 00:11:28.869 "zoned": false, 00:11:28.869 "supported_io_types": { 00:11:28.869 "read": true, 00:11:28.869 "write": true, 00:11:28.869 
"unmap": true, 00:11:28.869 "flush": true, 00:11:28.869 "reset": true, 00:11:28.869 "nvme_admin": false, 00:11:28.869 "nvme_io": false, 00:11:28.869 "nvme_io_md": false, 00:11:28.869 "write_zeroes": true, 00:11:28.869 "zcopy": true, 00:11:28.869 "get_zone_info": false, 00:11:28.869 "zone_management": false, 00:11:28.869 "zone_append": false, 00:11:28.869 "compare": false, 00:11:28.869 "compare_and_write": false, 00:11:28.869 "abort": true, 00:11:28.869 "seek_hole": false, 00:11:28.869 "seek_data": false, 00:11:28.869 "copy": true, 00:11:28.869 "nvme_iov_md": false 00:11:28.869 }, 00:11:28.869 "memory_domains": [ 00:11:28.869 { 00:11:28.869 "dma_device_id": "system", 00:11:28.869 "dma_device_type": 1 00:11:28.869 }, 00:11:28.869 { 00:11:28.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.869 "dma_device_type": 2 00:11:28.869 } 00:11:28.869 ], 00:11:28.869 "driver_specific": {} 00:11:28.869 } 00:11:28.869 ] 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.869 02:25:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.869 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.870 "name": "Existed_Raid", 00:11:28.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.870 "strip_size_kb": 0, 00:11:28.870 "state": "configuring", 00:11:28.870 "raid_level": "raid1", 00:11:28.870 "superblock": false, 00:11:28.870 "num_base_bdevs": 4, 00:11:28.870 "num_base_bdevs_discovered": 2, 00:11:28.870 "num_base_bdevs_operational": 4, 00:11:28.870 "base_bdevs_list": [ 00:11:28.870 { 00:11:28.870 "name": "BaseBdev1", 00:11:28.870 "uuid": "ec92aa04-fda5-4d73-89b8-58c58448124c", 00:11:28.870 "is_configured": true, 00:11:28.870 "data_offset": 0, 00:11:28.870 "data_size": 65536 00:11:28.870 }, 00:11:28.870 { 00:11:28.870 "name": "BaseBdev2", 00:11:28.870 "uuid": "3abecfb5-fc1e-4642-9612-67c2f4afc08b", 00:11:28.870 "is_configured": true, 00:11:28.870 
"data_offset": 0, 00:11:28.870 "data_size": 65536 00:11:28.870 }, 00:11:28.870 { 00:11:28.870 "name": "BaseBdev3", 00:11:28.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.870 "is_configured": false, 00:11:28.870 "data_offset": 0, 00:11:28.870 "data_size": 0 00:11:28.870 }, 00:11:28.870 { 00:11:28.870 "name": "BaseBdev4", 00:11:28.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.870 "is_configured": false, 00:11:28.870 "data_offset": 0, 00:11:28.870 "data_size": 0 00:11:28.870 } 00:11:28.870 ] 00:11:28.870 }' 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.870 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.437 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:29.437 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.437 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.437 [2024-10-13 02:25:47.863416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.438 BaseBdev3 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.438 [ 00:11:29.438 { 00:11:29.438 "name": "BaseBdev3", 00:11:29.438 "aliases": [ 00:11:29.438 "68fe1103-9c48-48e7-a8f1-af469271a1a4" 00:11:29.438 ], 00:11:29.438 "product_name": "Malloc disk", 00:11:29.438 "block_size": 512, 00:11:29.438 "num_blocks": 65536, 00:11:29.438 "uuid": "68fe1103-9c48-48e7-a8f1-af469271a1a4", 00:11:29.438 "assigned_rate_limits": { 00:11:29.438 "rw_ios_per_sec": 0, 00:11:29.438 "rw_mbytes_per_sec": 0, 00:11:29.438 "r_mbytes_per_sec": 0, 00:11:29.438 "w_mbytes_per_sec": 0 00:11:29.438 }, 00:11:29.438 "claimed": true, 00:11:29.438 "claim_type": "exclusive_write", 00:11:29.438 "zoned": false, 00:11:29.438 "supported_io_types": { 00:11:29.438 "read": true, 00:11:29.438 "write": true, 00:11:29.438 "unmap": true, 00:11:29.438 "flush": true, 00:11:29.438 "reset": true, 00:11:29.438 "nvme_admin": false, 00:11:29.438 "nvme_io": false, 00:11:29.438 "nvme_io_md": false, 00:11:29.438 "write_zeroes": true, 00:11:29.438 "zcopy": true, 00:11:29.438 "get_zone_info": false, 00:11:29.438 "zone_management": false, 00:11:29.438 "zone_append": false, 00:11:29.438 "compare": false, 00:11:29.438 "compare_and_write": false, 00:11:29.438 "abort": true, 
00:11:29.438 "seek_hole": false, 00:11:29.438 "seek_data": false, 00:11:29.438 "copy": true, 00:11:29.438 "nvme_iov_md": false 00:11:29.438 }, 00:11:29.438 "memory_domains": [ 00:11:29.438 { 00:11:29.438 "dma_device_id": "system", 00:11:29.438 "dma_device_type": 1 00:11:29.438 }, 00:11:29.438 { 00:11:29.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.438 "dma_device_type": 2 00:11:29.438 } 00:11:29.438 ], 00:11:29.438 "driver_specific": {} 00:11:29.438 } 00:11:29.438 ] 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.438 02:25:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.438 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.438 "name": "Existed_Raid", 00:11:29.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.438 "strip_size_kb": 0, 00:11:29.438 "state": "configuring", 00:11:29.438 "raid_level": "raid1", 00:11:29.438 "superblock": false, 00:11:29.438 "num_base_bdevs": 4, 00:11:29.438 "num_base_bdevs_discovered": 3, 00:11:29.438 "num_base_bdevs_operational": 4, 00:11:29.438 "base_bdevs_list": [ 00:11:29.438 { 00:11:29.438 "name": "BaseBdev1", 00:11:29.438 "uuid": "ec92aa04-fda5-4d73-89b8-58c58448124c", 00:11:29.438 "is_configured": true, 00:11:29.438 "data_offset": 0, 00:11:29.438 "data_size": 65536 00:11:29.438 }, 00:11:29.438 { 00:11:29.438 "name": "BaseBdev2", 00:11:29.438 "uuid": "3abecfb5-fc1e-4642-9612-67c2f4afc08b", 00:11:29.438 "is_configured": true, 00:11:29.438 "data_offset": 0, 00:11:29.438 "data_size": 65536 00:11:29.438 }, 00:11:29.438 { 00:11:29.439 "name": "BaseBdev3", 00:11:29.439 "uuid": "68fe1103-9c48-48e7-a8f1-af469271a1a4", 00:11:29.439 "is_configured": true, 00:11:29.439 "data_offset": 0, 00:11:29.439 "data_size": 65536 00:11:29.439 }, 00:11:29.439 { 00:11:29.439 "name": "BaseBdev4", 00:11:29.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.439 "is_configured": false, 00:11:29.439 "data_offset": 
0, 00:11:29.439 "data_size": 0 00:11:29.439 } 00:11:29.439 ] 00:11:29.439 }' 00:11:29.439 02:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.439 02:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.697 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:29.697 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.697 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.697 [2024-10-13 02:25:48.319523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:29.697 [2024-10-13 02:25:48.319662] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:29.697 [2024-10-13 02:25:48.319676] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:29.697 [2024-10-13 02:25:48.320054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:29.697 [2024-10-13 02:25:48.320226] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:29.697 [2024-10-13 02:25:48.320241] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:11:29.697 [2024-10-13 02:25:48.320485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.697 BaseBdev4 00:11:29.697 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.697 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:29.697 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.698 [ 00:11:29.698 { 00:11:29.698 "name": "BaseBdev4", 00:11:29.698 "aliases": [ 00:11:29.698 "6c60dcf4-d261-4bb0-8536-9dec55fc1ac6" 00:11:29.698 ], 00:11:29.698 "product_name": "Malloc disk", 00:11:29.698 "block_size": 512, 00:11:29.698 "num_blocks": 65536, 00:11:29.698 "uuid": "6c60dcf4-d261-4bb0-8536-9dec55fc1ac6", 00:11:29.698 "assigned_rate_limits": { 00:11:29.698 "rw_ios_per_sec": 0, 00:11:29.698 "rw_mbytes_per_sec": 0, 00:11:29.698 "r_mbytes_per_sec": 0, 00:11:29.698 "w_mbytes_per_sec": 0 00:11:29.698 }, 00:11:29.698 "claimed": true, 00:11:29.698 "claim_type": "exclusive_write", 00:11:29.698 "zoned": false, 00:11:29.698 "supported_io_types": { 00:11:29.698 "read": true, 00:11:29.698 "write": true, 00:11:29.698 "unmap": true, 00:11:29.698 "flush": true, 00:11:29.698 "reset": true, 00:11:29.698 "nvme_admin": false, 00:11:29.698 "nvme_io": 
false, 00:11:29.698 "nvme_io_md": false, 00:11:29.698 "write_zeroes": true, 00:11:29.698 "zcopy": true, 00:11:29.698 "get_zone_info": false, 00:11:29.698 "zone_management": false, 00:11:29.698 "zone_append": false, 00:11:29.698 "compare": false, 00:11:29.698 "compare_and_write": false, 00:11:29.698 "abort": true, 00:11:29.698 "seek_hole": false, 00:11:29.698 "seek_data": false, 00:11:29.698 "copy": true, 00:11:29.698 "nvme_iov_md": false 00:11:29.698 }, 00:11:29.698 "memory_domains": [ 00:11:29.698 { 00:11:29.698 "dma_device_id": "system", 00:11:29.698 "dma_device_type": 1 00:11:29.698 }, 00:11:29.698 { 00:11:29.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.698 "dma_device_type": 2 00:11:29.698 } 00:11:29.698 ], 00:11:29.698 "driver_specific": {} 00:11:29.698 } 00:11:29.698 ] 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.698 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.958 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.958 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.958 "name": "Existed_Raid", 00:11:29.958 "uuid": "e8853537-ae33-4f9e-bd9c-56f2a8691a6f", 00:11:29.958 "strip_size_kb": 0, 00:11:29.958 "state": "online", 00:11:29.958 "raid_level": "raid1", 00:11:29.958 "superblock": false, 00:11:29.958 "num_base_bdevs": 4, 00:11:29.958 "num_base_bdevs_discovered": 4, 00:11:29.958 "num_base_bdevs_operational": 4, 00:11:29.958 "base_bdevs_list": [ 00:11:29.958 { 00:11:29.958 "name": "BaseBdev1", 00:11:29.958 "uuid": "ec92aa04-fda5-4d73-89b8-58c58448124c", 00:11:29.958 "is_configured": true, 00:11:29.958 "data_offset": 0, 00:11:29.958 "data_size": 65536 00:11:29.958 }, 00:11:29.958 { 00:11:29.958 "name": "BaseBdev2", 00:11:29.958 "uuid": "3abecfb5-fc1e-4642-9612-67c2f4afc08b", 00:11:29.958 "is_configured": true, 00:11:29.958 "data_offset": 0, 00:11:29.958 "data_size": 65536 00:11:29.958 }, 00:11:29.958 { 00:11:29.958 "name": "BaseBdev3", 00:11:29.958 "uuid": "68fe1103-9c48-48e7-a8f1-af469271a1a4", 
00:11:29.958 "is_configured": true, 00:11:29.958 "data_offset": 0, 00:11:29.958 "data_size": 65536 00:11:29.958 }, 00:11:29.958 { 00:11:29.958 "name": "BaseBdev4", 00:11:29.958 "uuid": "6c60dcf4-d261-4bb0-8536-9dec55fc1ac6", 00:11:29.958 "is_configured": true, 00:11:29.958 "data_offset": 0, 00:11:29.958 "data_size": 65536 00:11:29.958 } 00:11:29.958 ] 00:11:29.958 }' 00:11:29.958 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.958 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.217 [2024-10-13 02:25:48.795285] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.217 02:25:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.217 "name": "Existed_Raid", 00:11:30.217 "aliases": [ 00:11:30.217 "e8853537-ae33-4f9e-bd9c-56f2a8691a6f" 00:11:30.217 ], 00:11:30.217 "product_name": "Raid Volume", 00:11:30.217 "block_size": 512, 00:11:30.217 "num_blocks": 65536, 00:11:30.217 "uuid": "e8853537-ae33-4f9e-bd9c-56f2a8691a6f", 00:11:30.217 "assigned_rate_limits": { 00:11:30.217 "rw_ios_per_sec": 0, 00:11:30.217 "rw_mbytes_per_sec": 0, 00:11:30.217 "r_mbytes_per_sec": 0, 00:11:30.217 "w_mbytes_per_sec": 0 00:11:30.217 }, 00:11:30.217 "claimed": false, 00:11:30.217 "zoned": false, 00:11:30.217 "supported_io_types": { 00:11:30.217 "read": true, 00:11:30.217 "write": true, 00:11:30.217 "unmap": false, 00:11:30.217 "flush": false, 00:11:30.217 "reset": true, 00:11:30.217 "nvme_admin": false, 00:11:30.217 "nvme_io": false, 00:11:30.217 "nvme_io_md": false, 00:11:30.217 "write_zeroes": true, 00:11:30.217 "zcopy": false, 00:11:30.217 "get_zone_info": false, 00:11:30.217 "zone_management": false, 00:11:30.217 "zone_append": false, 00:11:30.217 "compare": false, 00:11:30.217 "compare_and_write": false, 00:11:30.217 "abort": false, 00:11:30.217 "seek_hole": false, 00:11:30.217 "seek_data": false, 00:11:30.217 "copy": false, 00:11:30.217 "nvme_iov_md": false 00:11:30.217 }, 00:11:30.217 "memory_domains": [ 00:11:30.217 { 00:11:30.217 "dma_device_id": "system", 00:11:30.217 "dma_device_type": 1 00:11:30.217 }, 00:11:30.217 { 00:11:30.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.218 "dma_device_type": 2 00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "dma_device_id": "system", 00:11:30.218 "dma_device_type": 1 00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.218 "dma_device_type": 2 00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "dma_device_id": "system", 00:11:30.218 "dma_device_type": 1 00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.218 "dma_device_type": 2 
00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "dma_device_id": "system", 00:11:30.218 "dma_device_type": 1 00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.218 "dma_device_type": 2 00:11:30.218 } 00:11:30.218 ], 00:11:30.218 "driver_specific": { 00:11:30.218 "raid": { 00:11:30.218 "uuid": "e8853537-ae33-4f9e-bd9c-56f2a8691a6f", 00:11:30.218 "strip_size_kb": 0, 00:11:30.218 "state": "online", 00:11:30.218 "raid_level": "raid1", 00:11:30.218 "superblock": false, 00:11:30.218 "num_base_bdevs": 4, 00:11:30.218 "num_base_bdevs_discovered": 4, 00:11:30.218 "num_base_bdevs_operational": 4, 00:11:30.218 "base_bdevs_list": [ 00:11:30.218 { 00:11:30.218 "name": "BaseBdev1", 00:11:30.218 "uuid": "ec92aa04-fda5-4d73-89b8-58c58448124c", 00:11:30.218 "is_configured": true, 00:11:30.218 "data_offset": 0, 00:11:30.218 "data_size": 65536 00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "name": "BaseBdev2", 00:11:30.218 "uuid": "3abecfb5-fc1e-4642-9612-67c2f4afc08b", 00:11:30.218 "is_configured": true, 00:11:30.218 "data_offset": 0, 00:11:30.218 "data_size": 65536 00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "name": "BaseBdev3", 00:11:30.218 "uuid": "68fe1103-9c48-48e7-a8f1-af469271a1a4", 00:11:30.218 "is_configured": true, 00:11:30.218 "data_offset": 0, 00:11:30.218 "data_size": 65536 00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "name": "BaseBdev4", 00:11:30.218 "uuid": "6c60dcf4-d261-4bb0-8536-9dec55fc1ac6", 00:11:30.218 "is_configured": true, 00:11:30.218 "data_offset": 0, 00:11:30.218 "data_size": 65536 00:11:30.218 } 00:11:30.218 ] 00:11:30.218 } 00:11:30.218 } 00:11:30.218 }' 00:11:30.218 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.218 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:30.218 BaseBdev2 00:11:30.218 BaseBdev3 00:11:30.218 BaseBdev4' 00:11:30.218 
02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.478 02:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.478 [2024-10-13 02:25:49.082465] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:30.478 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.479 "name": "Existed_Raid", 00:11:30.479 "uuid": "e8853537-ae33-4f9e-bd9c-56f2a8691a6f", 00:11:30.479 "strip_size_kb": 0, 00:11:30.479 "state": "online", 00:11:30.479 "raid_level": "raid1", 00:11:30.479 "superblock": false, 00:11:30.479 "num_base_bdevs": 4, 00:11:30.479 "num_base_bdevs_discovered": 3, 00:11:30.479 "num_base_bdevs_operational": 3, 00:11:30.479 "base_bdevs_list": [ 00:11:30.479 { 00:11:30.479 "name": null, 00:11:30.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.479 "is_configured": false, 00:11:30.479 "data_offset": 0, 00:11:30.479 "data_size": 65536 00:11:30.479 }, 00:11:30.479 { 00:11:30.479 "name": "BaseBdev2", 00:11:30.479 "uuid": "3abecfb5-fc1e-4642-9612-67c2f4afc08b", 00:11:30.479 "is_configured": true, 00:11:30.479 "data_offset": 0, 00:11:30.479 "data_size": 65536 00:11:30.479 }, 00:11:30.479 { 00:11:30.479 "name": "BaseBdev3", 00:11:30.479 "uuid": "68fe1103-9c48-48e7-a8f1-af469271a1a4", 00:11:30.479 "is_configured": true, 00:11:30.479 "data_offset": 0, 00:11:30.479 "data_size": 65536 00:11:30.479 }, 00:11:30.479 { 
00:11:30.479 "name": "BaseBdev4", 00:11:30.479 "uuid": "6c60dcf4-d261-4bb0-8536-9dec55fc1ac6", 00:11:30.479 "is_configured": true, 00:11:30.479 "data_offset": 0, 00:11:30.479 "data_size": 65536 00:11:30.479 } 00:11:30.479 ] 00:11:30.479 }' 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.479 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.048 [2024-10-13 02:25:49.622572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.048 
02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.048 [2024-10-13 02:25:49.699207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.048 02:25:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.048 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 [2024-10-13 02:25:49.779879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:31.308 [2024-10-13 02:25:49.779991] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.308 [2024-10-13 02:25:49.801213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.308 [2024-10-13 02:25:49.801272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.308 [2024-10-13 02:25:49.801286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.308 02:25:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 BaseBdev2 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:31.308 02:25:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 [ 00:11:31.308 { 00:11:31.308 "name": "BaseBdev2", 00:11:31.308 "aliases": [ 00:11:31.308 "39717675-9d04-42fe-8b02-4c0aaa0abe31" 00:11:31.308 ], 00:11:31.308 "product_name": "Malloc disk", 00:11:31.308 "block_size": 512, 00:11:31.308 "num_blocks": 65536, 00:11:31.308 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:31.308 "assigned_rate_limits": { 00:11:31.308 "rw_ios_per_sec": 0, 00:11:31.308 "rw_mbytes_per_sec": 0, 00:11:31.308 "r_mbytes_per_sec": 0, 00:11:31.308 "w_mbytes_per_sec": 0 00:11:31.308 }, 00:11:31.308 "claimed": false, 00:11:31.308 "zoned": false, 00:11:31.308 "supported_io_types": { 00:11:31.308 "read": true, 00:11:31.308 "write": true, 00:11:31.308 "unmap": true, 00:11:31.308 "flush": true, 00:11:31.308 "reset": true, 00:11:31.308 "nvme_admin": false, 00:11:31.308 "nvme_io": false, 00:11:31.308 "nvme_io_md": false, 00:11:31.308 "write_zeroes": true, 00:11:31.308 "zcopy": true, 00:11:31.308 "get_zone_info": false, 00:11:31.308 "zone_management": false, 00:11:31.308 "zone_append": false, 00:11:31.308 "compare": false, 00:11:31.308 "compare_and_write": false, 
00:11:31.308 "abort": true, 00:11:31.308 "seek_hole": false, 00:11:31.308 "seek_data": false, 00:11:31.308 "copy": true, 00:11:31.308 "nvme_iov_md": false 00:11:31.308 }, 00:11:31.308 "memory_domains": [ 00:11:31.308 { 00:11:31.308 "dma_device_id": "system", 00:11:31.308 "dma_device_type": 1 00:11:31.308 }, 00:11:31.308 { 00:11:31.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.308 "dma_device_type": 2 00:11:31.308 } 00:11:31.308 ], 00:11:31.308 "driver_specific": {} 00:11:31.308 } 00:11:31.308 ] 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 BaseBdev3 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:31.308 02:25:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.309 [ 00:11:31.309 { 00:11:31.309 "name": "BaseBdev3", 00:11:31.309 "aliases": [ 00:11:31.309 "7a37c087-bb52-45ac-a53a-25e3a054afac" 00:11:31.309 ], 00:11:31.309 "product_name": "Malloc disk", 00:11:31.309 "block_size": 512, 00:11:31.309 "num_blocks": 65536, 00:11:31.309 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:31.309 "assigned_rate_limits": { 00:11:31.309 "rw_ios_per_sec": 0, 00:11:31.309 "rw_mbytes_per_sec": 0, 00:11:31.309 "r_mbytes_per_sec": 0, 00:11:31.309 "w_mbytes_per_sec": 0 00:11:31.309 }, 00:11:31.309 "claimed": false, 00:11:31.309 "zoned": false, 00:11:31.309 "supported_io_types": { 00:11:31.309 "read": true, 00:11:31.309 "write": true, 00:11:31.309 "unmap": true, 00:11:31.309 "flush": true, 00:11:31.309 "reset": true, 00:11:31.309 "nvme_admin": false, 00:11:31.309 "nvme_io": false, 00:11:31.309 "nvme_io_md": false, 00:11:31.309 "write_zeroes": true, 00:11:31.309 "zcopy": true, 00:11:31.309 "get_zone_info": false, 00:11:31.309 "zone_management": false, 00:11:31.309 "zone_append": false, 00:11:31.309 "compare": false, 00:11:31.309 "compare_and_write": false, 
00:11:31.309 "abort": true, 00:11:31.309 "seek_hole": false, 00:11:31.309 "seek_data": false, 00:11:31.309 "copy": true, 00:11:31.309 "nvme_iov_md": false 00:11:31.309 }, 00:11:31.309 "memory_domains": [ 00:11:31.309 { 00:11:31.309 "dma_device_id": "system", 00:11:31.309 "dma_device_type": 1 00:11:31.309 }, 00:11:31.309 { 00:11:31.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.309 "dma_device_type": 2 00:11:31.309 } 00:11:31.309 ], 00:11:31.309 "driver_specific": {} 00:11:31.309 } 00:11:31.309 ] 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.309 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.569 BaseBdev4 00:11:31.569 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.569 02:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:31.569 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:31.569 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:31.569 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:31.569 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:31.569 02:25:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:31.569 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:31.569 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.569 02:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.569 [ 00:11:31.569 { 00:11:31.569 "name": "BaseBdev4", 00:11:31.569 "aliases": [ 00:11:31.569 "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb" 00:11:31.569 ], 00:11:31.569 "product_name": "Malloc disk", 00:11:31.569 "block_size": 512, 00:11:31.569 "num_blocks": 65536, 00:11:31.569 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:31.569 "assigned_rate_limits": { 00:11:31.569 "rw_ios_per_sec": 0, 00:11:31.569 "rw_mbytes_per_sec": 0, 00:11:31.569 "r_mbytes_per_sec": 0, 00:11:31.569 "w_mbytes_per_sec": 0 00:11:31.569 }, 00:11:31.569 "claimed": false, 00:11:31.569 "zoned": false, 00:11:31.569 "supported_io_types": { 00:11:31.569 "read": true, 00:11:31.569 "write": true, 00:11:31.569 "unmap": true, 00:11:31.569 "flush": true, 00:11:31.569 "reset": true, 00:11:31.569 "nvme_admin": false, 00:11:31.569 "nvme_io": false, 00:11:31.569 "nvme_io_md": false, 00:11:31.569 "write_zeroes": true, 00:11:31.569 "zcopy": true, 00:11:31.569 "get_zone_info": false, 00:11:31.569 "zone_management": false, 00:11:31.569 "zone_append": false, 00:11:31.569 "compare": false, 00:11:31.569 "compare_and_write": false, 
00:11:31.569 "abort": true, 00:11:31.569 "seek_hole": false, 00:11:31.569 "seek_data": false, 00:11:31.569 "copy": true, 00:11:31.569 "nvme_iov_md": false 00:11:31.569 }, 00:11:31.569 "memory_domains": [ 00:11:31.569 { 00:11:31.569 "dma_device_id": "system", 00:11:31.569 "dma_device_type": 1 00:11:31.569 }, 00:11:31.569 { 00:11:31.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.569 "dma_device_type": 2 00:11:31.569 } 00:11:31.569 ], 00:11:31.569 "driver_specific": {} 00:11:31.569 } 00:11:31.569 ] 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.569 [2024-10-13 02:25:50.042265] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.569 [2024-10-13 02:25:50.042371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.569 [2024-10-13 02:25:50.042413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.569 [2024-10-13 02:25:50.044605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.569 [2024-10-13 02:25:50.044705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.569 02:25:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.569 "name": "Existed_Raid", 00:11:31.569 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:31.569 "strip_size_kb": 0, 00:11:31.569 "state": "configuring", 00:11:31.569 "raid_level": "raid1", 00:11:31.569 "superblock": false, 00:11:31.569 "num_base_bdevs": 4, 00:11:31.569 "num_base_bdevs_discovered": 3, 00:11:31.569 "num_base_bdevs_operational": 4, 00:11:31.569 "base_bdevs_list": [ 00:11:31.569 { 00:11:31.569 "name": "BaseBdev1", 00:11:31.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.569 "is_configured": false, 00:11:31.569 "data_offset": 0, 00:11:31.569 "data_size": 0 00:11:31.569 }, 00:11:31.569 { 00:11:31.569 "name": "BaseBdev2", 00:11:31.569 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:31.569 "is_configured": true, 00:11:31.569 "data_offset": 0, 00:11:31.569 "data_size": 65536 00:11:31.569 }, 00:11:31.569 { 00:11:31.569 "name": "BaseBdev3", 00:11:31.569 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:31.569 "is_configured": true, 00:11:31.569 "data_offset": 0, 00:11:31.569 "data_size": 65536 00:11:31.569 }, 00:11:31.569 { 00:11:31.569 "name": "BaseBdev4", 00:11:31.569 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:31.569 "is_configured": true, 00:11:31.569 "data_offset": 0, 00:11:31.569 "data_size": 65536 00:11:31.569 } 00:11:31.569 ] 00:11:31.569 }' 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.569 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.829 [2024-10-13 02:25:50.441638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.829 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.829 "name": "Existed_Raid", 00:11:31.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.829 
"strip_size_kb": 0, 00:11:31.829 "state": "configuring", 00:11:31.830 "raid_level": "raid1", 00:11:31.830 "superblock": false, 00:11:31.830 "num_base_bdevs": 4, 00:11:31.830 "num_base_bdevs_discovered": 2, 00:11:31.830 "num_base_bdevs_operational": 4, 00:11:31.830 "base_bdevs_list": [ 00:11:31.830 { 00:11:31.830 "name": "BaseBdev1", 00:11:31.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.830 "is_configured": false, 00:11:31.830 "data_offset": 0, 00:11:31.830 "data_size": 0 00:11:31.830 }, 00:11:31.830 { 00:11:31.830 "name": null, 00:11:31.830 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:31.830 "is_configured": false, 00:11:31.830 "data_offset": 0, 00:11:31.830 "data_size": 65536 00:11:31.830 }, 00:11:31.830 { 00:11:31.830 "name": "BaseBdev3", 00:11:31.830 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:31.830 "is_configured": true, 00:11:31.830 "data_offset": 0, 00:11:31.830 "data_size": 65536 00:11:31.830 }, 00:11:31.830 { 00:11:31.830 "name": "BaseBdev4", 00:11:31.830 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:31.830 "is_configured": true, 00:11:31.830 "data_offset": 0, 00:11:31.830 "data_size": 65536 00:11:31.830 } 00:11:31.830 ] 00:11:31.830 }' 00:11:31.830 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.830 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.400 02:25:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.400 [2024-10-13 02:25:50.909564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.400 BaseBdev1 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.400 [ 00:11:32.400 { 00:11:32.400 "name": "BaseBdev1", 00:11:32.400 "aliases": [ 00:11:32.400 "d654044a-e4c6-4ee2-af4e-aee13edc7ca8" 00:11:32.400 ], 00:11:32.400 "product_name": "Malloc disk", 00:11:32.400 "block_size": 512, 00:11:32.400 "num_blocks": 65536, 00:11:32.400 "uuid": "d654044a-e4c6-4ee2-af4e-aee13edc7ca8", 00:11:32.400 "assigned_rate_limits": { 00:11:32.400 "rw_ios_per_sec": 0, 00:11:32.400 "rw_mbytes_per_sec": 0, 00:11:32.400 "r_mbytes_per_sec": 0, 00:11:32.400 "w_mbytes_per_sec": 0 00:11:32.400 }, 00:11:32.400 "claimed": true, 00:11:32.400 "claim_type": "exclusive_write", 00:11:32.400 "zoned": false, 00:11:32.400 "supported_io_types": { 00:11:32.400 "read": true, 00:11:32.400 "write": true, 00:11:32.400 "unmap": true, 00:11:32.400 "flush": true, 00:11:32.400 "reset": true, 00:11:32.400 "nvme_admin": false, 00:11:32.400 "nvme_io": false, 00:11:32.400 "nvme_io_md": false, 00:11:32.400 "write_zeroes": true, 00:11:32.400 "zcopy": true, 00:11:32.400 "get_zone_info": false, 00:11:32.400 "zone_management": false, 00:11:32.400 "zone_append": false, 00:11:32.400 "compare": false, 00:11:32.400 "compare_and_write": false, 00:11:32.400 "abort": true, 00:11:32.400 "seek_hole": false, 00:11:32.400 "seek_data": false, 00:11:32.400 "copy": true, 00:11:32.400 "nvme_iov_md": false 00:11:32.400 }, 00:11:32.400 "memory_domains": [ 00:11:32.400 { 00:11:32.400 "dma_device_id": "system", 00:11:32.400 "dma_device_type": 1 00:11:32.400 }, 00:11:32.400 { 00:11:32.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.400 "dma_device_type": 2 00:11:32.400 } 00:11:32.400 ], 00:11:32.400 "driver_specific": {} 00:11:32.400 } 00:11:32.400 ] 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.400 02:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.400 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.400 "name": "Existed_Raid", 00:11:32.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.400 
"strip_size_kb": 0, 00:11:32.400 "state": "configuring", 00:11:32.400 "raid_level": "raid1", 00:11:32.400 "superblock": false, 00:11:32.400 "num_base_bdevs": 4, 00:11:32.400 "num_base_bdevs_discovered": 3, 00:11:32.400 "num_base_bdevs_operational": 4, 00:11:32.400 "base_bdevs_list": [ 00:11:32.400 { 00:11:32.400 "name": "BaseBdev1", 00:11:32.400 "uuid": "d654044a-e4c6-4ee2-af4e-aee13edc7ca8", 00:11:32.400 "is_configured": true, 00:11:32.400 "data_offset": 0, 00:11:32.400 "data_size": 65536 00:11:32.400 }, 00:11:32.400 { 00:11:32.400 "name": null, 00:11:32.400 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:32.400 "is_configured": false, 00:11:32.400 "data_offset": 0, 00:11:32.400 "data_size": 65536 00:11:32.400 }, 00:11:32.400 { 00:11:32.400 "name": "BaseBdev3", 00:11:32.400 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:32.400 "is_configured": true, 00:11:32.400 "data_offset": 0, 00:11:32.400 "data_size": 65536 00:11:32.400 }, 00:11:32.400 { 00:11:32.401 "name": "BaseBdev4", 00:11:32.401 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:32.401 "is_configured": true, 00:11:32.401 "data_offset": 0, 00:11:32.401 "data_size": 65536 00:11:32.401 } 00:11:32.401 ] 00:11:32.401 }' 00:11:32.401 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.401 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.662 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.662 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.662 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.662 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.662 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.932 
02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.932 [2024-10-13 02:25:51.384814] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.932 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.932 "name": "Existed_Raid", 00:11:32.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.932 "strip_size_kb": 0, 00:11:32.932 "state": "configuring", 00:11:32.932 "raid_level": "raid1", 00:11:32.932 "superblock": false, 00:11:32.932 "num_base_bdevs": 4, 00:11:32.932 "num_base_bdevs_discovered": 2, 00:11:32.932 "num_base_bdevs_operational": 4, 00:11:32.932 "base_bdevs_list": [ 00:11:32.932 { 00:11:32.932 "name": "BaseBdev1", 00:11:32.932 "uuid": "d654044a-e4c6-4ee2-af4e-aee13edc7ca8", 00:11:32.932 "is_configured": true, 00:11:32.932 "data_offset": 0, 00:11:32.932 "data_size": 65536 00:11:32.932 }, 00:11:32.932 { 00:11:32.932 "name": null, 00:11:32.932 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:32.932 "is_configured": false, 00:11:32.932 "data_offset": 0, 00:11:32.932 "data_size": 65536 00:11:32.932 }, 00:11:32.932 { 00:11:32.932 "name": null, 00:11:32.933 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:32.933 "is_configured": false, 00:11:32.933 "data_offset": 0, 00:11:32.933 "data_size": 65536 00:11:32.933 }, 00:11:32.933 { 00:11:32.933 "name": "BaseBdev4", 00:11:32.933 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:32.933 "is_configured": true, 00:11:32.933 "data_offset": 0, 00:11:32.933 "data_size": 65536 00:11:32.933 } 00:11:32.933 ] 00:11:32.933 }' 00:11:32.933 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.933 02:25:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.193 [2024-10-13 02:25:51.860033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.193 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.454 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.454 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.454 "name": "Existed_Raid", 00:11:33.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.454 "strip_size_kb": 0, 00:11:33.454 "state": "configuring", 00:11:33.454 "raid_level": "raid1", 00:11:33.454 "superblock": false, 00:11:33.454 "num_base_bdevs": 4, 00:11:33.454 "num_base_bdevs_discovered": 3, 00:11:33.454 "num_base_bdevs_operational": 4, 00:11:33.454 "base_bdevs_list": [ 00:11:33.454 { 00:11:33.454 "name": "BaseBdev1", 00:11:33.454 "uuid": "d654044a-e4c6-4ee2-af4e-aee13edc7ca8", 00:11:33.454 "is_configured": true, 00:11:33.454 "data_offset": 0, 00:11:33.454 "data_size": 65536 00:11:33.454 }, 00:11:33.454 { 00:11:33.454 "name": null, 00:11:33.454 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:33.454 "is_configured": false, 00:11:33.454 "data_offset": 0, 00:11:33.454 "data_size": 65536 00:11:33.454 }, 00:11:33.454 { 
00:11:33.454 "name": "BaseBdev3", 00:11:33.454 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:33.454 "is_configured": true, 00:11:33.454 "data_offset": 0, 00:11:33.454 "data_size": 65536 00:11:33.454 }, 00:11:33.454 { 00:11:33.454 "name": "BaseBdev4", 00:11:33.454 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:33.454 "is_configured": true, 00:11:33.454 "data_offset": 0, 00:11:33.454 "data_size": 65536 00:11:33.454 } 00:11:33.454 ] 00:11:33.454 }' 00:11:33.454 02:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.454 02:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.714 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.714 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.714 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.714 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.714 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.714 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:33.714 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:33.714 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.714 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.714 [2024-10-13 02:25:52.391125] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.974 "name": "Existed_Raid", 00:11:33.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.974 "strip_size_kb": 0, 00:11:33.974 "state": "configuring", 00:11:33.974 "raid_level": "raid1", 00:11:33.974 "superblock": false, 00:11:33.974 
"num_base_bdevs": 4, 00:11:33.974 "num_base_bdevs_discovered": 2, 00:11:33.974 "num_base_bdevs_operational": 4, 00:11:33.974 "base_bdevs_list": [ 00:11:33.974 { 00:11:33.974 "name": null, 00:11:33.974 "uuid": "d654044a-e4c6-4ee2-af4e-aee13edc7ca8", 00:11:33.974 "is_configured": false, 00:11:33.974 "data_offset": 0, 00:11:33.974 "data_size": 65536 00:11:33.974 }, 00:11:33.974 { 00:11:33.974 "name": null, 00:11:33.974 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:33.974 "is_configured": false, 00:11:33.974 "data_offset": 0, 00:11:33.974 "data_size": 65536 00:11:33.974 }, 00:11:33.974 { 00:11:33.974 "name": "BaseBdev3", 00:11:33.974 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:33.974 "is_configured": true, 00:11:33.974 "data_offset": 0, 00:11:33.974 "data_size": 65536 00:11:33.974 }, 00:11:33.974 { 00:11:33.974 "name": "BaseBdev4", 00:11:33.974 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:33.974 "is_configured": true, 00:11:33.974 "data_offset": 0, 00:11:33.974 "data_size": 65536 00:11:33.974 } 00:11:33.974 ] 00:11:33.974 }' 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.974 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:34.235 02:25:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.235 [2024-10-13 02:25:52.866425] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.235 02:25:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.235 "name": "Existed_Raid", 00:11:34.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.235 "strip_size_kb": 0, 00:11:34.235 "state": "configuring", 00:11:34.235 "raid_level": "raid1", 00:11:34.235 "superblock": false, 00:11:34.235 "num_base_bdevs": 4, 00:11:34.235 "num_base_bdevs_discovered": 3, 00:11:34.235 "num_base_bdevs_operational": 4, 00:11:34.235 "base_bdevs_list": [ 00:11:34.235 { 00:11:34.235 "name": null, 00:11:34.235 "uuid": "d654044a-e4c6-4ee2-af4e-aee13edc7ca8", 00:11:34.235 "is_configured": false, 00:11:34.235 "data_offset": 0, 00:11:34.235 "data_size": 65536 00:11:34.235 }, 00:11:34.235 { 00:11:34.235 "name": "BaseBdev2", 00:11:34.235 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:34.235 "is_configured": true, 00:11:34.235 "data_offset": 0, 00:11:34.235 "data_size": 65536 00:11:34.235 }, 00:11:34.235 { 00:11:34.235 "name": "BaseBdev3", 00:11:34.235 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:34.235 "is_configured": true, 00:11:34.235 "data_offset": 0, 00:11:34.235 "data_size": 65536 00:11:34.235 }, 00:11:34.235 { 00:11:34.235 "name": "BaseBdev4", 00:11:34.235 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:34.235 "is_configured": true, 00:11:34.235 "data_offset": 0, 00:11:34.235 "data_size": 65536 00:11:34.235 } 00:11:34.235 ] 00:11:34.235 }' 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.235 02:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.806 02:25:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d654044a-e4c6-4ee2-af4e-aee13edc7ca8 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.806 [2024-10-13 02:25:53.394362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:34.806 [2024-10-13 02:25:53.394409] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:34.806 [2024-10-13 02:25:53.394419] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:34.806 
[2024-10-13 02:25:53.394738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:11:34.806 [2024-10-13 02:25:53.394912] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:34.806 [2024-10-13 02:25:53.394924] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:34.806 [2024-10-13 02:25:53.395124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.806 NewBaseBdev 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.806 [ 00:11:34.806 { 00:11:34.806 "name": "NewBaseBdev", 00:11:34.806 "aliases": [ 00:11:34.806 "d654044a-e4c6-4ee2-af4e-aee13edc7ca8" 00:11:34.806 ], 00:11:34.806 "product_name": "Malloc disk", 00:11:34.806 "block_size": 512, 00:11:34.806 "num_blocks": 65536, 00:11:34.806 "uuid": "d654044a-e4c6-4ee2-af4e-aee13edc7ca8", 00:11:34.806 "assigned_rate_limits": { 00:11:34.806 "rw_ios_per_sec": 0, 00:11:34.806 "rw_mbytes_per_sec": 0, 00:11:34.806 "r_mbytes_per_sec": 0, 00:11:34.806 "w_mbytes_per_sec": 0 00:11:34.806 }, 00:11:34.806 "claimed": true, 00:11:34.806 "claim_type": "exclusive_write", 00:11:34.806 "zoned": false, 00:11:34.806 "supported_io_types": { 00:11:34.806 "read": true, 00:11:34.806 "write": true, 00:11:34.806 "unmap": true, 00:11:34.806 "flush": true, 00:11:34.806 "reset": true, 00:11:34.806 "nvme_admin": false, 00:11:34.806 "nvme_io": false, 00:11:34.806 "nvme_io_md": false, 00:11:34.806 "write_zeroes": true, 00:11:34.806 "zcopy": true, 00:11:34.806 "get_zone_info": false, 00:11:34.806 "zone_management": false, 00:11:34.806 "zone_append": false, 00:11:34.806 "compare": false, 00:11:34.806 "compare_and_write": false, 00:11:34.806 "abort": true, 00:11:34.806 "seek_hole": false, 00:11:34.806 "seek_data": false, 00:11:34.806 "copy": true, 00:11:34.806 "nvme_iov_md": false 00:11:34.806 }, 00:11:34.806 "memory_domains": [ 00:11:34.806 { 00:11:34.806 "dma_device_id": "system", 00:11:34.806 "dma_device_type": 1 00:11:34.806 }, 00:11:34.806 { 00:11:34.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.806 "dma_device_type": 2 00:11:34.806 } 00:11:34.806 ], 00:11:34.806 "driver_specific": {} 00:11:34.806 } 00:11:34.806 ] 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.806 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.806 "name": "Existed_Raid", 00:11:34.806 "uuid": "64a06b58-28fd-408f-84b7-6bba791e7a5e", 00:11:34.806 "strip_size_kb": 0, 00:11:34.806 "state": "online", 00:11:34.806 
"raid_level": "raid1", 00:11:34.806 "superblock": false, 00:11:34.807 "num_base_bdevs": 4, 00:11:34.807 "num_base_bdevs_discovered": 4, 00:11:34.807 "num_base_bdevs_operational": 4, 00:11:34.807 "base_bdevs_list": [ 00:11:34.807 { 00:11:34.807 "name": "NewBaseBdev", 00:11:34.807 "uuid": "d654044a-e4c6-4ee2-af4e-aee13edc7ca8", 00:11:34.807 "is_configured": true, 00:11:34.807 "data_offset": 0, 00:11:34.807 "data_size": 65536 00:11:34.807 }, 00:11:34.807 { 00:11:34.807 "name": "BaseBdev2", 00:11:34.807 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:34.807 "is_configured": true, 00:11:34.807 "data_offset": 0, 00:11:34.807 "data_size": 65536 00:11:34.807 }, 00:11:34.807 { 00:11:34.807 "name": "BaseBdev3", 00:11:34.807 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:34.807 "is_configured": true, 00:11:34.807 "data_offset": 0, 00:11:34.807 "data_size": 65536 00:11:34.807 }, 00:11:34.807 { 00:11:34.807 "name": "BaseBdev4", 00:11:34.807 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:34.807 "is_configured": true, 00:11:34.807 "data_offset": 0, 00:11:34.807 "data_size": 65536 00:11:34.807 } 00:11:34.807 ] 00:11:34.807 }' 00:11:34.807 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.807 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.377 [2024-10-13 02:25:53.913841] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.377 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.377 "name": "Existed_Raid", 00:11:35.377 "aliases": [ 00:11:35.377 "64a06b58-28fd-408f-84b7-6bba791e7a5e" 00:11:35.377 ], 00:11:35.377 "product_name": "Raid Volume", 00:11:35.377 "block_size": 512, 00:11:35.377 "num_blocks": 65536, 00:11:35.377 "uuid": "64a06b58-28fd-408f-84b7-6bba791e7a5e", 00:11:35.377 "assigned_rate_limits": { 00:11:35.377 "rw_ios_per_sec": 0, 00:11:35.377 "rw_mbytes_per_sec": 0, 00:11:35.377 "r_mbytes_per_sec": 0, 00:11:35.377 "w_mbytes_per_sec": 0 00:11:35.377 }, 00:11:35.377 "claimed": false, 00:11:35.377 "zoned": false, 00:11:35.377 "supported_io_types": { 00:11:35.377 "read": true, 00:11:35.377 "write": true, 00:11:35.377 "unmap": false, 00:11:35.377 "flush": false, 00:11:35.377 "reset": true, 00:11:35.377 "nvme_admin": false, 00:11:35.377 "nvme_io": false, 00:11:35.377 "nvme_io_md": false, 00:11:35.377 "write_zeroes": true, 00:11:35.377 "zcopy": false, 00:11:35.377 "get_zone_info": false, 00:11:35.377 "zone_management": false, 00:11:35.377 "zone_append": false, 00:11:35.377 "compare": false, 00:11:35.377 "compare_and_write": false, 00:11:35.377 "abort": false, 00:11:35.377 "seek_hole": false, 00:11:35.377 "seek_data": false, 00:11:35.377 
"copy": false, 00:11:35.377 "nvme_iov_md": false 00:11:35.377 }, 00:11:35.377 "memory_domains": [ 00:11:35.377 { 00:11:35.377 "dma_device_id": "system", 00:11:35.377 "dma_device_type": 1 00:11:35.377 }, 00:11:35.377 { 00:11:35.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.377 "dma_device_type": 2 00:11:35.377 }, 00:11:35.377 { 00:11:35.377 "dma_device_id": "system", 00:11:35.377 "dma_device_type": 1 00:11:35.377 }, 00:11:35.377 { 00:11:35.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.377 "dma_device_type": 2 00:11:35.377 }, 00:11:35.377 { 00:11:35.377 "dma_device_id": "system", 00:11:35.377 "dma_device_type": 1 00:11:35.377 }, 00:11:35.377 { 00:11:35.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.377 "dma_device_type": 2 00:11:35.377 }, 00:11:35.377 { 00:11:35.377 "dma_device_id": "system", 00:11:35.377 "dma_device_type": 1 00:11:35.377 }, 00:11:35.377 { 00:11:35.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.377 "dma_device_type": 2 00:11:35.377 } 00:11:35.377 ], 00:11:35.377 "driver_specific": { 00:11:35.377 "raid": { 00:11:35.377 "uuid": "64a06b58-28fd-408f-84b7-6bba791e7a5e", 00:11:35.377 "strip_size_kb": 0, 00:11:35.377 "state": "online", 00:11:35.377 "raid_level": "raid1", 00:11:35.377 "superblock": false, 00:11:35.377 "num_base_bdevs": 4, 00:11:35.378 "num_base_bdevs_discovered": 4, 00:11:35.378 "num_base_bdevs_operational": 4, 00:11:35.378 "base_bdevs_list": [ 00:11:35.378 { 00:11:35.378 "name": "NewBaseBdev", 00:11:35.378 "uuid": "d654044a-e4c6-4ee2-af4e-aee13edc7ca8", 00:11:35.378 "is_configured": true, 00:11:35.378 "data_offset": 0, 00:11:35.378 "data_size": 65536 00:11:35.378 }, 00:11:35.378 { 00:11:35.378 "name": "BaseBdev2", 00:11:35.378 "uuid": "39717675-9d04-42fe-8b02-4c0aaa0abe31", 00:11:35.378 "is_configured": true, 00:11:35.378 "data_offset": 0, 00:11:35.378 "data_size": 65536 00:11:35.378 }, 00:11:35.378 { 00:11:35.378 "name": "BaseBdev3", 00:11:35.378 "uuid": "7a37c087-bb52-45ac-a53a-25e3a054afac", 00:11:35.378 
"is_configured": true, 00:11:35.378 "data_offset": 0, 00:11:35.378 "data_size": 65536 00:11:35.378 }, 00:11:35.378 { 00:11:35.378 "name": "BaseBdev4", 00:11:35.378 "uuid": "ccb6c5fc-1e0a-426e-8868-8c5cd19283bb", 00:11:35.378 "is_configured": true, 00:11:35.378 "data_offset": 0, 00:11:35.378 "data_size": 65536 00:11:35.378 } 00:11:35.378 ] 00:11:35.378 } 00:11:35.378 } 00:11:35.378 }' 00:11:35.378 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.378 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:35.378 BaseBdev2 00:11:35.378 BaseBdev3 00:11:35.378 BaseBdev4' 00:11:35.378 02:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.378 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.378 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.378 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:35.378 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.378 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.378 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.378 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.640 02:25:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.640 02:25:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.640 [2024-10-13 02:25:54.181024] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.640 [2024-10-13 02:25:54.181058] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.640 [2024-10-13 02:25:54.181156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.640 [2024-10-13 02:25:54.181453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.640 [2024-10-13 02:25:54.181471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 83902 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83902 ']' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83902 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83902 00:11:35.640 killing process with pid 83902 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83902' 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83902 00:11:35.640 [2024-10-13 02:25:54.219531] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.640 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83902 00:11:35.640 [2024-10-13 02:25:54.296475] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:36.212 00:11:36.212 real 0m9.548s 00:11:36.212 user 0m15.869s 00:11:36.212 sys 0m2.176s 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.212 ************************************ 00:11:36.212 END TEST raid_state_function_test 00:11:36.212 ************************************ 
00:11:36.212 02:25:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:36.212 02:25:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:36.212 02:25:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.212 02:25:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.212 ************************************ 00:11:36.212 START TEST raid_state_function_test_sb 00:11:36.212 ************************************ 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.212 
02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84551 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84551' 00:11:36.212 Process raid pid: 84551 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84551 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84551 ']' 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.212 02:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.213 02:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.213 02:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.213 02:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.213 [2024-10-13 02:25:54.830710] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:36.213 [2024-10-13 02:25:54.830878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.474 [2024-10-13 02:25:54.977188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.474 [2024-10-13 02:25:55.051240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.474 [2024-10-13 02:25:55.127782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.474 [2024-10-13 02:25:55.127826] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.045 [2024-10-13 02:25:55.672631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.045 [2024-10-13 02:25:55.672688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.045 [2024-10-13 02:25:55.672725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.045 [2024-10-13 02:25:55.672737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:37.045 [2024-10-13 02:25:55.672743] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:37.045 [2024-10-13 02:25:55.672756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:37.045 [2024-10-13 02:25:55.672762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:37.045 [2024-10-13 02:25:55.672771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.045 02:25:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.045 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.305 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.305 "name": "Existed_Raid", 00:11:37.305 "uuid": "e501955c-155e-4291-a393-3f4b16ab2716", 00:11:37.305 "strip_size_kb": 0, 00:11:37.305 "state": "configuring", 00:11:37.305 "raid_level": "raid1", 00:11:37.305 "superblock": true, 00:11:37.305 "num_base_bdevs": 4, 00:11:37.305 "num_base_bdevs_discovered": 0, 00:11:37.305 "num_base_bdevs_operational": 4, 00:11:37.305 "base_bdevs_list": [ 00:11:37.305 { 00:11:37.305 "name": "BaseBdev1", 00:11:37.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.305 "is_configured": false, 00:11:37.305 "data_offset": 0, 00:11:37.305 "data_size": 0 00:11:37.305 }, 00:11:37.305 { 00:11:37.305 "name": "BaseBdev2", 00:11:37.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.305 "is_configured": false, 00:11:37.305 "data_offset": 0, 00:11:37.305 "data_size": 0 00:11:37.305 }, 00:11:37.305 { 00:11:37.305 "name": "BaseBdev3", 00:11:37.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.305 "is_configured": false, 00:11:37.305 "data_offset": 0, 00:11:37.305 "data_size": 0 00:11:37.305 }, 00:11:37.305 { 00:11:37.305 "name": "BaseBdev4", 00:11:37.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.305 "is_configured": false, 00:11:37.305 "data_offset": 0, 00:11:37.305 "data_size": 0 00:11:37.305 } 00:11:37.305 ] 00:11:37.305 }' 00:11:37.305 02:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.305 02:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.566 02:25:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:37.566 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.566 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.566 [2024-10-13 02:25:56.139565] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.566 [2024-10-13 02:25:56.139665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:11:37.566 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.566 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.566 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.566 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.566 [2024-10-13 02:25:56.151572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.566 [2024-10-13 02:25:56.151652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.566 [2024-10-13 02:25:56.151682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.566 [2024-10-13 02:25:56.151705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:37.566 [2024-10-13 02:25:56.151730] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:37.566 [2024-10-13 02:25:56.151757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:37.566 [2024-10-13 02:25:56.151794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:11:37.566 [2024-10-13 02:25:56.151817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:37.566 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.566 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.567 [2024-10-13 02:25:56.178673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.567 BaseBdev1 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.567 [ 00:11:37.567 { 00:11:37.567 "name": "BaseBdev1", 00:11:37.567 "aliases": [ 00:11:37.567 "2c22f0f0-2cfa-4ee3-b8df-f01509511d78" 00:11:37.567 ], 00:11:37.567 "product_name": "Malloc disk", 00:11:37.567 "block_size": 512, 00:11:37.567 "num_blocks": 65536, 00:11:37.567 "uuid": "2c22f0f0-2cfa-4ee3-b8df-f01509511d78", 00:11:37.567 "assigned_rate_limits": { 00:11:37.567 "rw_ios_per_sec": 0, 00:11:37.567 "rw_mbytes_per_sec": 0, 00:11:37.567 "r_mbytes_per_sec": 0, 00:11:37.567 "w_mbytes_per_sec": 0 00:11:37.567 }, 00:11:37.567 "claimed": true, 00:11:37.567 "claim_type": "exclusive_write", 00:11:37.567 "zoned": false, 00:11:37.567 "supported_io_types": { 00:11:37.567 "read": true, 00:11:37.567 "write": true, 00:11:37.567 "unmap": true, 00:11:37.567 "flush": true, 00:11:37.567 "reset": true, 00:11:37.567 "nvme_admin": false, 00:11:37.567 "nvme_io": false, 00:11:37.567 "nvme_io_md": false, 00:11:37.567 "write_zeroes": true, 00:11:37.567 "zcopy": true, 00:11:37.567 "get_zone_info": false, 00:11:37.567 "zone_management": false, 00:11:37.567 "zone_append": false, 00:11:37.567 "compare": false, 00:11:37.567 "compare_and_write": false, 00:11:37.567 "abort": true, 00:11:37.567 "seek_hole": false, 00:11:37.567 "seek_data": false, 00:11:37.567 "copy": true, 00:11:37.567 "nvme_iov_md": false 00:11:37.567 }, 00:11:37.567 "memory_domains": [ 00:11:37.567 { 00:11:37.567 "dma_device_id": "system", 00:11:37.567 "dma_device_type": 1 00:11:37.567 }, 00:11:37.567 { 00:11:37.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.567 "dma_device_type": 2 00:11:37.567 } 00:11:37.567 
], 00:11:37.567 "driver_specific": {} 00:11:37.567 } 00:11:37.567 ] 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.567 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.567 02:25:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.828 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.828 "name": "Existed_Raid", 00:11:37.828 "uuid": "75185567-92c2-4e13-8069-182ec2fd7e5f", 00:11:37.828 "strip_size_kb": 0, 00:11:37.828 "state": "configuring", 00:11:37.828 "raid_level": "raid1", 00:11:37.828 "superblock": true, 00:11:37.828 "num_base_bdevs": 4, 00:11:37.828 "num_base_bdevs_discovered": 1, 00:11:37.828 "num_base_bdevs_operational": 4, 00:11:37.828 "base_bdevs_list": [ 00:11:37.828 { 00:11:37.828 "name": "BaseBdev1", 00:11:37.828 "uuid": "2c22f0f0-2cfa-4ee3-b8df-f01509511d78", 00:11:37.828 "is_configured": true, 00:11:37.828 "data_offset": 2048, 00:11:37.828 "data_size": 63488 00:11:37.828 }, 00:11:37.828 { 00:11:37.828 "name": "BaseBdev2", 00:11:37.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.828 "is_configured": false, 00:11:37.828 "data_offset": 0, 00:11:37.828 "data_size": 0 00:11:37.828 }, 00:11:37.828 { 00:11:37.828 "name": "BaseBdev3", 00:11:37.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.828 "is_configured": false, 00:11:37.828 "data_offset": 0, 00:11:37.828 "data_size": 0 00:11:37.828 }, 00:11:37.828 { 00:11:37.828 "name": "BaseBdev4", 00:11:37.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.828 "is_configured": false, 00:11:37.828 "data_offset": 0, 00:11:37.828 "data_size": 0 00:11:37.828 } 00:11:37.828 ] 00:11:37.828 }' 00:11:37.828 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.828 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.088 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:38.088 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.088 02:25:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.088 [2024-10-13 02:25:56.641950] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:38.088 [2024-10-13 02:25:56.642103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:11:38.088 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.089 [2024-10-13 02:25:56.653986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.089 [2024-10-13 02:25:56.656391] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:38.089 [2024-10-13 02:25:56.656492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:38.089 [2024-10-13 02:25:56.656509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:38.089 [2024-10-13 02:25:56.656520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:38.089 [2024-10-13 02:25:56.656527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:38.089 [2024-10-13 02:25:56.656537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:38.089 "name": "Existed_Raid", 00:11:38.089 "uuid": "fdfbfe1a-2479-473c-966a-6c4e620ed5b9", 00:11:38.089 "strip_size_kb": 0, 00:11:38.089 "state": "configuring", 00:11:38.089 "raid_level": "raid1", 00:11:38.089 "superblock": true, 00:11:38.089 "num_base_bdevs": 4, 00:11:38.089 "num_base_bdevs_discovered": 1, 00:11:38.089 "num_base_bdevs_operational": 4, 00:11:38.089 "base_bdevs_list": [ 00:11:38.089 { 00:11:38.089 "name": "BaseBdev1", 00:11:38.089 "uuid": "2c22f0f0-2cfa-4ee3-b8df-f01509511d78", 00:11:38.089 "is_configured": true, 00:11:38.089 "data_offset": 2048, 00:11:38.089 "data_size": 63488 00:11:38.089 }, 00:11:38.089 { 00:11:38.089 "name": "BaseBdev2", 00:11:38.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.089 "is_configured": false, 00:11:38.089 "data_offset": 0, 00:11:38.089 "data_size": 0 00:11:38.089 }, 00:11:38.089 { 00:11:38.089 "name": "BaseBdev3", 00:11:38.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.089 "is_configured": false, 00:11:38.089 "data_offset": 0, 00:11:38.089 "data_size": 0 00:11:38.089 }, 00:11:38.089 { 00:11:38.089 "name": "BaseBdev4", 00:11:38.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.089 "is_configured": false, 00:11:38.089 "data_offset": 0, 00:11:38.089 "data_size": 0 00:11:38.089 } 00:11:38.089 ] 00:11:38.089 }' 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.089 02:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.660 [2024-10-13 02:25:57.069940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:11:38.660 BaseBdev2 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.660 [ 00:11:38.660 { 00:11:38.660 "name": "BaseBdev2", 00:11:38.660 "aliases": [ 00:11:38.660 "0c938225-028b-4846-96eb-2c93dbef5e3b" 00:11:38.660 ], 00:11:38.660 "product_name": "Malloc disk", 00:11:38.660 "block_size": 512, 00:11:38.660 "num_blocks": 65536, 00:11:38.660 "uuid": "0c938225-028b-4846-96eb-2c93dbef5e3b", 00:11:38.660 
"assigned_rate_limits": { 00:11:38.660 "rw_ios_per_sec": 0, 00:11:38.660 "rw_mbytes_per_sec": 0, 00:11:38.660 "r_mbytes_per_sec": 0, 00:11:38.660 "w_mbytes_per_sec": 0 00:11:38.660 }, 00:11:38.660 "claimed": true, 00:11:38.660 "claim_type": "exclusive_write", 00:11:38.660 "zoned": false, 00:11:38.660 "supported_io_types": { 00:11:38.660 "read": true, 00:11:38.660 "write": true, 00:11:38.660 "unmap": true, 00:11:38.660 "flush": true, 00:11:38.660 "reset": true, 00:11:38.660 "nvme_admin": false, 00:11:38.660 "nvme_io": false, 00:11:38.660 "nvme_io_md": false, 00:11:38.660 "write_zeroes": true, 00:11:38.660 "zcopy": true, 00:11:38.660 "get_zone_info": false, 00:11:38.660 "zone_management": false, 00:11:38.660 "zone_append": false, 00:11:38.660 "compare": false, 00:11:38.660 "compare_and_write": false, 00:11:38.660 "abort": true, 00:11:38.660 "seek_hole": false, 00:11:38.660 "seek_data": false, 00:11:38.660 "copy": true, 00:11:38.660 "nvme_iov_md": false 00:11:38.660 }, 00:11:38.660 "memory_domains": [ 00:11:38.660 { 00:11:38.660 "dma_device_id": "system", 00:11:38.660 "dma_device_type": 1 00:11:38.660 }, 00:11:38.660 { 00:11:38.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.660 "dma_device_type": 2 00:11:38.660 } 00:11:38.660 ], 00:11:38.660 "driver_specific": {} 00:11:38.660 } 00:11:38.660 ] 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.660 "name": "Existed_Raid", 00:11:38.660 "uuid": "fdfbfe1a-2479-473c-966a-6c4e620ed5b9", 00:11:38.660 "strip_size_kb": 0, 00:11:38.660 "state": "configuring", 00:11:38.660 "raid_level": "raid1", 00:11:38.660 "superblock": true, 00:11:38.660 "num_base_bdevs": 4, 00:11:38.660 "num_base_bdevs_discovered": 2, 00:11:38.660 "num_base_bdevs_operational": 4, 
00:11:38.660 "base_bdevs_list": [ 00:11:38.660 { 00:11:38.660 "name": "BaseBdev1", 00:11:38.660 "uuid": "2c22f0f0-2cfa-4ee3-b8df-f01509511d78", 00:11:38.660 "is_configured": true, 00:11:38.660 "data_offset": 2048, 00:11:38.660 "data_size": 63488 00:11:38.660 }, 00:11:38.660 { 00:11:38.660 "name": "BaseBdev2", 00:11:38.660 "uuid": "0c938225-028b-4846-96eb-2c93dbef5e3b", 00:11:38.660 "is_configured": true, 00:11:38.660 "data_offset": 2048, 00:11:38.660 "data_size": 63488 00:11:38.660 }, 00:11:38.660 { 00:11:38.660 "name": "BaseBdev3", 00:11:38.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.660 "is_configured": false, 00:11:38.660 "data_offset": 0, 00:11:38.660 "data_size": 0 00:11:38.660 }, 00:11:38.660 { 00:11:38.660 "name": "BaseBdev4", 00:11:38.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.660 "is_configured": false, 00:11:38.660 "data_offset": 0, 00:11:38.660 "data_size": 0 00:11:38.660 } 00:11:38.660 ] 00:11:38.660 }' 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.660 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.921 [2024-10-13 02:25:57.561903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.921 BaseBdev3 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.921 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.921 [ 00:11:38.921 { 00:11:38.921 "name": "BaseBdev3", 00:11:38.921 "aliases": [ 00:11:38.921 "9f750fd5-025f-4f97-a8f1-6f2396f4d674" 00:11:38.921 ], 00:11:38.921 "product_name": "Malloc disk", 00:11:38.921 "block_size": 512, 00:11:38.921 "num_blocks": 65536, 00:11:38.921 "uuid": "9f750fd5-025f-4f97-a8f1-6f2396f4d674", 00:11:38.921 "assigned_rate_limits": { 00:11:38.921 "rw_ios_per_sec": 0, 00:11:38.921 "rw_mbytes_per_sec": 0, 00:11:38.921 "r_mbytes_per_sec": 0, 00:11:38.921 "w_mbytes_per_sec": 0 00:11:38.921 }, 00:11:38.921 "claimed": true, 00:11:38.921 "claim_type": "exclusive_write", 00:11:38.921 "zoned": false, 00:11:38.921 "supported_io_types": { 00:11:38.921 "read": true, 00:11:38.921 
"write": true, 00:11:38.921 "unmap": true, 00:11:38.921 "flush": true, 00:11:38.921 "reset": true, 00:11:38.921 "nvme_admin": false, 00:11:38.921 "nvme_io": false, 00:11:38.921 "nvme_io_md": false, 00:11:38.921 "write_zeroes": true, 00:11:38.921 "zcopy": true, 00:11:38.921 "get_zone_info": false, 00:11:38.921 "zone_management": false, 00:11:38.921 "zone_append": false, 00:11:38.921 "compare": false, 00:11:38.921 "compare_and_write": false, 00:11:38.921 "abort": true, 00:11:38.921 "seek_hole": false, 00:11:38.921 "seek_data": false, 00:11:38.921 "copy": true, 00:11:38.921 "nvme_iov_md": false 00:11:38.921 }, 00:11:38.921 "memory_domains": [ 00:11:38.921 { 00:11:38.921 "dma_device_id": "system", 00:11:38.921 "dma_device_type": 1 00:11:38.921 }, 00:11:39.182 { 00:11:39.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.182 "dma_device_type": 2 00:11:39.182 } 00:11:39.182 ], 00:11:39.182 "driver_specific": {} 00:11:39.182 } 00:11:39.182 ] 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.182 "name": "Existed_Raid", 00:11:39.182 "uuid": "fdfbfe1a-2479-473c-966a-6c4e620ed5b9", 00:11:39.182 "strip_size_kb": 0, 00:11:39.182 "state": "configuring", 00:11:39.182 "raid_level": "raid1", 00:11:39.182 "superblock": true, 00:11:39.182 "num_base_bdevs": 4, 00:11:39.182 "num_base_bdevs_discovered": 3, 00:11:39.182 "num_base_bdevs_operational": 4, 00:11:39.182 "base_bdevs_list": [ 00:11:39.182 { 00:11:39.182 "name": "BaseBdev1", 00:11:39.182 "uuid": "2c22f0f0-2cfa-4ee3-b8df-f01509511d78", 00:11:39.182 "is_configured": true, 00:11:39.182 "data_offset": 2048, 00:11:39.182 "data_size": 63488 00:11:39.182 }, 00:11:39.182 { 00:11:39.182 "name": "BaseBdev2", 00:11:39.182 "uuid": 
"0c938225-028b-4846-96eb-2c93dbef5e3b", 00:11:39.182 "is_configured": true, 00:11:39.182 "data_offset": 2048, 00:11:39.182 "data_size": 63488 00:11:39.182 }, 00:11:39.182 { 00:11:39.182 "name": "BaseBdev3", 00:11:39.182 "uuid": "9f750fd5-025f-4f97-a8f1-6f2396f4d674", 00:11:39.182 "is_configured": true, 00:11:39.182 "data_offset": 2048, 00:11:39.182 "data_size": 63488 00:11:39.182 }, 00:11:39.182 { 00:11:39.182 "name": "BaseBdev4", 00:11:39.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.182 "is_configured": false, 00:11:39.182 "data_offset": 0, 00:11:39.182 "data_size": 0 00:11:39.182 } 00:11:39.182 ] 00:11:39.182 }' 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.182 02:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 [2024-10-13 02:25:58.054071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:39.547 [2024-10-13 02:25:58.054427] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:39.547 [2024-10-13 02:25:58.054450] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:39.547 [2024-10-13 02:25:58.054813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:39.547 [2024-10-13 02:25:58.055027] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:39.547 [2024-10-13 02:25:58.055044] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 
00:11:39.547 BaseBdev4 00:11:39.547 [2024-10-13 02:25:58.055187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 [ 00:11:39.547 { 00:11:39.547 "name": "BaseBdev4", 00:11:39.547 "aliases": [ 00:11:39.547 "22b811aa-c4b4-4e65-826c-37f5d34728ee" 00:11:39.547 ], 00:11:39.547 "product_name": "Malloc disk", 00:11:39.548 "block_size": 512, 00:11:39.548 
"num_blocks": 65536, 00:11:39.548 "uuid": "22b811aa-c4b4-4e65-826c-37f5d34728ee", 00:11:39.548 "assigned_rate_limits": { 00:11:39.548 "rw_ios_per_sec": 0, 00:11:39.548 "rw_mbytes_per_sec": 0, 00:11:39.548 "r_mbytes_per_sec": 0, 00:11:39.548 "w_mbytes_per_sec": 0 00:11:39.548 }, 00:11:39.548 "claimed": true, 00:11:39.548 "claim_type": "exclusive_write", 00:11:39.548 "zoned": false, 00:11:39.548 "supported_io_types": { 00:11:39.548 "read": true, 00:11:39.548 "write": true, 00:11:39.548 "unmap": true, 00:11:39.548 "flush": true, 00:11:39.548 "reset": true, 00:11:39.548 "nvme_admin": false, 00:11:39.548 "nvme_io": false, 00:11:39.548 "nvme_io_md": false, 00:11:39.548 "write_zeroes": true, 00:11:39.548 "zcopy": true, 00:11:39.548 "get_zone_info": false, 00:11:39.548 "zone_management": false, 00:11:39.548 "zone_append": false, 00:11:39.548 "compare": false, 00:11:39.548 "compare_and_write": false, 00:11:39.548 "abort": true, 00:11:39.548 "seek_hole": false, 00:11:39.548 "seek_data": false, 00:11:39.548 "copy": true, 00:11:39.548 "nvme_iov_md": false 00:11:39.548 }, 00:11:39.548 "memory_domains": [ 00:11:39.548 { 00:11:39.548 "dma_device_id": "system", 00:11:39.548 "dma_device_type": 1 00:11:39.548 }, 00:11:39.548 { 00:11:39.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.548 "dma_device_type": 2 00:11:39.548 } 00:11:39.548 ], 00:11:39.548 "driver_specific": {} 00:11:39.548 } 00:11:39.548 ] 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.548 "name": "Existed_Raid", 00:11:39.548 "uuid": "fdfbfe1a-2479-473c-966a-6c4e620ed5b9", 00:11:39.548 "strip_size_kb": 0, 00:11:39.548 "state": "online", 00:11:39.548 "raid_level": "raid1", 00:11:39.548 "superblock": true, 00:11:39.548 "num_base_bdevs": 4, 
00:11:39.548 "num_base_bdevs_discovered": 4, 00:11:39.548 "num_base_bdevs_operational": 4, 00:11:39.548 "base_bdevs_list": [ 00:11:39.548 { 00:11:39.548 "name": "BaseBdev1", 00:11:39.548 "uuid": "2c22f0f0-2cfa-4ee3-b8df-f01509511d78", 00:11:39.548 "is_configured": true, 00:11:39.548 "data_offset": 2048, 00:11:39.548 "data_size": 63488 00:11:39.548 }, 00:11:39.548 { 00:11:39.548 "name": "BaseBdev2", 00:11:39.548 "uuid": "0c938225-028b-4846-96eb-2c93dbef5e3b", 00:11:39.548 "is_configured": true, 00:11:39.548 "data_offset": 2048, 00:11:39.548 "data_size": 63488 00:11:39.548 }, 00:11:39.548 { 00:11:39.548 "name": "BaseBdev3", 00:11:39.548 "uuid": "9f750fd5-025f-4f97-a8f1-6f2396f4d674", 00:11:39.548 "is_configured": true, 00:11:39.548 "data_offset": 2048, 00:11:39.548 "data_size": 63488 00:11:39.548 }, 00:11:39.548 { 00:11:39.548 "name": "BaseBdev4", 00:11:39.548 "uuid": "22b811aa-c4b4-4e65-826c-37f5d34728ee", 00:11:39.548 "is_configured": true, 00:11:39.548 "data_offset": 2048, 00:11:39.548 "data_size": 63488 00:11:39.548 } 00:11:39.548 ] 00:11:39.548 }' 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.548 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.808 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:39.808 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:39.808 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.808 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.808 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.808 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.808 
02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:39.808 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.808 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.808 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.808 [2024-10-13 02:25:58.485819] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.069 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.069 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:40.069 "name": "Existed_Raid", 00:11:40.069 "aliases": [ 00:11:40.069 "fdfbfe1a-2479-473c-966a-6c4e620ed5b9" 00:11:40.069 ], 00:11:40.069 "product_name": "Raid Volume", 00:11:40.069 "block_size": 512, 00:11:40.069 "num_blocks": 63488, 00:11:40.069 "uuid": "fdfbfe1a-2479-473c-966a-6c4e620ed5b9", 00:11:40.069 "assigned_rate_limits": { 00:11:40.069 "rw_ios_per_sec": 0, 00:11:40.069 "rw_mbytes_per_sec": 0, 00:11:40.069 "r_mbytes_per_sec": 0, 00:11:40.069 "w_mbytes_per_sec": 0 00:11:40.069 }, 00:11:40.069 "claimed": false, 00:11:40.069 "zoned": false, 00:11:40.069 "supported_io_types": { 00:11:40.069 "read": true, 00:11:40.069 "write": true, 00:11:40.069 "unmap": false, 00:11:40.069 "flush": false, 00:11:40.069 "reset": true, 00:11:40.069 "nvme_admin": false, 00:11:40.069 "nvme_io": false, 00:11:40.069 "nvme_io_md": false, 00:11:40.069 "write_zeroes": true, 00:11:40.069 "zcopy": false, 00:11:40.069 "get_zone_info": false, 00:11:40.069 "zone_management": false, 00:11:40.069 "zone_append": false, 00:11:40.069 "compare": false, 00:11:40.069 "compare_and_write": false, 00:11:40.069 "abort": false, 00:11:40.069 "seek_hole": false, 00:11:40.069 "seek_data": false, 00:11:40.069 "copy": false, 00:11:40.070 
"nvme_iov_md": false 00:11:40.070 }, 00:11:40.070 "memory_domains": [ 00:11:40.070 { 00:11:40.070 "dma_device_id": "system", 00:11:40.070 "dma_device_type": 1 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.070 "dma_device_type": 2 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "dma_device_id": "system", 00:11:40.070 "dma_device_type": 1 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.070 "dma_device_type": 2 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "dma_device_id": "system", 00:11:40.070 "dma_device_type": 1 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.070 "dma_device_type": 2 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "dma_device_id": "system", 00:11:40.070 "dma_device_type": 1 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.070 "dma_device_type": 2 00:11:40.070 } 00:11:40.070 ], 00:11:40.070 "driver_specific": { 00:11:40.070 "raid": { 00:11:40.070 "uuid": "fdfbfe1a-2479-473c-966a-6c4e620ed5b9", 00:11:40.070 "strip_size_kb": 0, 00:11:40.070 "state": "online", 00:11:40.070 "raid_level": "raid1", 00:11:40.070 "superblock": true, 00:11:40.070 "num_base_bdevs": 4, 00:11:40.070 "num_base_bdevs_discovered": 4, 00:11:40.070 "num_base_bdevs_operational": 4, 00:11:40.070 "base_bdevs_list": [ 00:11:40.070 { 00:11:40.070 "name": "BaseBdev1", 00:11:40.070 "uuid": "2c22f0f0-2cfa-4ee3-b8df-f01509511d78", 00:11:40.070 "is_configured": true, 00:11:40.070 "data_offset": 2048, 00:11:40.070 "data_size": 63488 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "name": "BaseBdev2", 00:11:40.070 "uuid": "0c938225-028b-4846-96eb-2c93dbef5e3b", 00:11:40.070 "is_configured": true, 00:11:40.070 "data_offset": 2048, 00:11:40.070 "data_size": 63488 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "name": "BaseBdev3", 00:11:40.070 "uuid": "9f750fd5-025f-4f97-a8f1-6f2396f4d674", 00:11:40.070 "is_configured": true, 
00:11:40.070 "data_offset": 2048, 00:11:40.070 "data_size": 63488 00:11:40.070 }, 00:11:40.070 { 00:11:40.070 "name": "BaseBdev4", 00:11:40.070 "uuid": "22b811aa-c4b4-4e65-826c-37f5d34728ee", 00:11:40.070 "is_configured": true, 00:11:40.070 "data_offset": 2048, 00:11:40.070 "data_size": 63488 00:11:40.070 } 00:11:40.070 ] 00:11:40.070 } 00:11:40.070 } 00:11:40.070 }' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:40.070 BaseBdev2 00:11:40.070 BaseBdev3 00:11:40.070 BaseBdev4' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.070 02:25:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.070 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.331 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.331 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.332 [2024-10-13 02:25:58.808860] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:40.332 02:25:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.332 "name": "Existed_Raid", 00:11:40.332 "uuid": "fdfbfe1a-2479-473c-966a-6c4e620ed5b9", 00:11:40.332 "strip_size_kb": 0, 00:11:40.332 
"state": "online", 00:11:40.332 "raid_level": "raid1", 00:11:40.332 "superblock": true, 00:11:40.332 "num_base_bdevs": 4, 00:11:40.332 "num_base_bdevs_discovered": 3, 00:11:40.332 "num_base_bdevs_operational": 3, 00:11:40.332 "base_bdevs_list": [ 00:11:40.332 { 00:11:40.332 "name": null, 00:11:40.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.332 "is_configured": false, 00:11:40.332 "data_offset": 0, 00:11:40.332 "data_size": 63488 00:11:40.332 }, 00:11:40.332 { 00:11:40.332 "name": "BaseBdev2", 00:11:40.332 "uuid": "0c938225-028b-4846-96eb-2c93dbef5e3b", 00:11:40.332 "is_configured": true, 00:11:40.332 "data_offset": 2048, 00:11:40.332 "data_size": 63488 00:11:40.332 }, 00:11:40.332 { 00:11:40.332 "name": "BaseBdev3", 00:11:40.332 "uuid": "9f750fd5-025f-4f97-a8f1-6f2396f4d674", 00:11:40.332 "is_configured": true, 00:11:40.332 "data_offset": 2048, 00:11:40.332 "data_size": 63488 00:11:40.332 }, 00:11:40.332 { 00:11:40.332 "name": "BaseBdev4", 00:11:40.332 "uuid": "22b811aa-c4b4-4e65-826c-37f5d34728ee", 00:11:40.332 "is_configured": true, 00:11:40.332 "data_offset": 2048, 00:11:40.332 "data_size": 63488 00:11:40.332 } 00:11:40.332 ] 00:11:40.332 }' 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.332 02:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:40.593 02:25:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.593 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.593 [2024-10-13 02:25:59.260916] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.854 [2024-10-13 02:25:59.337441] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.854 [2024-10-13 02:25:59.418040] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:40.854 [2024-10-13 02:25:59.418160] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.854 [2024-10-13 02:25:59.439320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.854 [2024-10-13 02:25:59.439372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.854 [2024-10-13 02:25:59.439385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.854 BaseBdev2 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.854 02:25:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:41.116 [ 00:11:41.116 { 00:11:41.116 "name": "BaseBdev2", 00:11:41.116 "aliases": [ 00:11:41.116 "5cad8050-0923-4ffe-a72a-719f7bac4ecf" 00:11:41.116 ], 00:11:41.116 "product_name": "Malloc disk", 00:11:41.116 "block_size": 512, 00:11:41.116 "num_blocks": 65536, 00:11:41.116 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 00:11:41.116 "assigned_rate_limits": { 00:11:41.116 "rw_ios_per_sec": 0, 00:11:41.116 "rw_mbytes_per_sec": 0, 00:11:41.116 "r_mbytes_per_sec": 0, 00:11:41.116 "w_mbytes_per_sec": 0 00:11:41.116 }, 00:11:41.116 "claimed": false, 00:11:41.116 "zoned": false, 00:11:41.116 "supported_io_types": { 00:11:41.116 "read": true, 00:11:41.116 "write": true, 00:11:41.116 "unmap": true, 00:11:41.116 "flush": true, 00:11:41.116 "reset": true, 00:11:41.116 "nvme_admin": false, 00:11:41.116 "nvme_io": false, 00:11:41.116 "nvme_io_md": false, 00:11:41.116 "write_zeroes": true, 00:11:41.116 "zcopy": true, 00:11:41.116 "get_zone_info": false, 00:11:41.116 "zone_management": false, 00:11:41.116 "zone_append": false, 00:11:41.116 "compare": false, 00:11:41.116 "compare_and_write": false, 00:11:41.116 "abort": true, 00:11:41.116 "seek_hole": false, 00:11:41.116 "seek_data": false, 00:11:41.116 "copy": true, 00:11:41.116 "nvme_iov_md": false 00:11:41.116 }, 00:11:41.116 "memory_domains": [ 00:11:41.116 { 00:11:41.116 "dma_device_id": "system", 00:11:41.116 "dma_device_type": 1 00:11:41.116 }, 00:11:41.116 { 00:11:41.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.116 "dma_device_type": 2 00:11:41.116 } 00:11:41.116 ], 00:11:41.116 "driver_specific": {} 00:11:41.116 } 00:11:41.116 ] 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:41.116 02:25:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.116 BaseBdev3 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.116 02:25:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.116 [ 00:11:41.116 { 00:11:41.116 "name": "BaseBdev3", 00:11:41.116 "aliases": [ 00:11:41.116 "f2646415-52a0-41e4-a0b7-6e69d88ffdc4" 00:11:41.116 ], 00:11:41.116 "product_name": "Malloc disk", 00:11:41.116 "block_size": 512, 00:11:41.116 "num_blocks": 65536, 00:11:41.116 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:41.116 "assigned_rate_limits": { 00:11:41.116 "rw_ios_per_sec": 0, 00:11:41.116 "rw_mbytes_per_sec": 0, 00:11:41.116 "r_mbytes_per_sec": 0, 00:11:41.116 "w_mbytes_per_sec": 0 00:11:41.116 }, 00:11:41.116 "claimed": false, 00:11:41.116 "zoned": false, 00:11:41.116 "supported_io_types": { 00:11:41.116 "read": true, 00:11:41.116 "write": true, 00:11:41.116 "unmap": true, 00:11:41.116 "flush": true, 00:11:41.116 "reset": true, 00:11:41.116 "nvme_admin": false, 00:11:41.116 "nvme_io": false, 00:11:41.116 "nvme_io_md": false, 00:11:41.116 "write_zeroes": true, 00:11:41.116 "zcopy": true, 00:11:41.116 "get_zone_info": false, 00:11:41.116 "zone_management": false, 00:11:41.116 "zone_append": false, 00:11:41.116 "compare": false, 00:11:41.116 "compare_and_write": false, 00:11:41.116 "abort": true, 00:11:41.116 "seek_hole": false, 00:11:41.116 "seek_data": false, 00:11:41.116 "copy": true, 00:11:41.116 "nvme_iov_md": false 00:11:41.116 }, 00:11:41.116 "memory_domains": [ 00:11:41.116 { 00:11:41.116 "dma_device_id": "system", 00:11:41.116 "dma_device_type": 1 00:11:41.116 }, 00:11:41.116 { 00:11:41.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.116 "dma_device_type": 2 00:11:41.116 } 00:11:41.116 ], 00:11:41.116 "driver_specific": {} 00:11:41.116 } 00:11:41.116 ] 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.116 BaseBdev4 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.116 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.117 [ 00:11:41.117 { 00:11:41.117 "name": "BaseBdev4", 00:11:41.117 "aliases": [ 00:11:41.117 "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005" 00:11:41.117 ], 00:11:41.117 "product_name": "Malloc disk", 00:11:41.117 "block_size": 512, 00:11:41.117 "num_blocks": 65536, 00:11:41.117 "uuid": "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:41.117 "assigned_rate_limits": { 00:11:41.117 "rw_ios_per_sec": 0, 00:11:41.117 "rw_mbytes_per_sec": 0, 00:11:41.117 "r_mbytes_per_sec": 0, 00:11:41.117 "w_mbytes_per_sec": 0 00:11:41.117 }, 00:11:41.117 "claimed": false, 00:11:41.117 "zoned": false, 00:11:41.117 "supported_io_types": { 00:11:41.117 "read": true, 00:11:41.117 "write": true, 00:11:41.117 "unmap": true, 00:11:41.117 "flush": true, 00:11:41.117 "reset": true, 00:11:41.117 "nvme_admin": false, 00:11:41.117 "nvme_io": false, 00:11:41.117 "nvme_io_md": false, 00:11:41.117 "write_zeroes": true, 00:11:41.117 "zcopy": true, 00:11:41.117 "get_zone_info": false, 00:11:41.117 "zone_management": false, 00:11:41.117 "zone_append": false, 00:11:41.117 "compare": false, 00:11:41.117 "compare_and_write": false, 00:11:41.117 "abort": true, 00:11:41.117 "seek_hole": false, 00:11:41.117 "seek_data": false, 00:11:41.117 "copy": true, 00:11:41.117 "nvme_iov_md": false 00:11:41.117 }, 00:11:41.117 "memory_domains": [ 00:11:41.117 { 00:11:41.117 "dma_device_id": "system", 00:11:41.117 "dma_device_type": 1 00:11:41.117 }, 00:11:41.117 { 00:11:41.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.117 "dma_device_type": 2 00:11:41.117 } 00:11:41.117 ], 00:11:41.117 "driver_specific": {} 00:11:41.117 } 00:11:41.117 ] 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.117 [2024-10-13 02:25:59.674392] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.117 [2024-10-13 02:25:59.674445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.117 [2024-10-13 02:25:59.674465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.117 [2024-10-13 02:25:59.676561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.117 [2024-10-13 02:25:59.676609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.117 "name": "Existed_Raid", 00:11:41.117 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:41.117 "strip_size_kb": 0, 00:11:41.117 "state": "configuring", 00:11:41.117 "raid_level": "raid1", 00:11:41.117 "superblock": true, 00:11:41.117 "num_base_bdevs": 4, 00:11:41.117 "num_base_bdevs_discovered": 3, 00:11:41.117 "num_base_bdevs_operational": 4, 00:11:41.117 "base_bdevs_list": [ 00:11:41.117 { 00:11:41.117 "name": "BaseBdev1", 00:11:41.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.117 "is_configured": false, 00:11:41.117 "data_offset": 0, 00:11:41.117 "data_size": 0 00:11:41.117 }, 00:11:41.117 { 00:11:41.117 "name": "BaseBdev2", 00:11:41.117 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 
00:11:41.117 "is_configured": true, 00:11:41.117 "data_offset": 2048, 00:11:41.117 "data_size": 63488 00:11:41.117 }, 00:11:41.117 { 00:11:41.117 "name": "BaseBdev3", 00:11:41.117 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:41.117 "is_configured": true, 00:11:41.117 "data_offset": 2048, 00:11:41.117 "data_size": 63488 00:11:41.117 }, 00:11:41.117 { 00:11:41.117 "name": "BaseBdev4", 00:11:41.117 "uuid": "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:41.117 "is_configured": true, 00:11:41.117 "data_offset": 2048, 00:11:41.117 "data_size": 63488 00:11:41.117 } 00:11:41.117 ] 00:11:41.117 }' 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.117 02:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.688 [2024-10-13 02:26:00.149581] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.688 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.689 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.689 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.689 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.689 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.689 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.689 "name": "Existed_Raid", 00:11:41.689 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:41.689 "strip_size_kb": 0, 00:11:41.689 "state": "configuring", 00:11:41.689 "raid_level": "raid1", 00:11:41.689 "superblock": true, 00:11:41.689 "num_base_bdevs": 4, 00:11:41.689 "num_base_bdevs_discovered": 2, 00:11:41.689 "num_base_bdevs_operational": 4, 00:11:41.689 "base_bdevs_list": [ 00:11:41.689 { 00:11:41.689 "name": "BaseBdev1", 00:11:41.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.689 "is_configured": false, 00:11:41.689 "data_offset": 0, 00:11:41.689 "data_size": 0 00:11:41.689 }, 00:11:41.689 { 00:11:41.689 "name": null, 00:11:41.689 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 00:11:41.689 
"is_configured": false, 00:11:41.689 "data_offset": 0, 00:11:41.689 "data_size": 63488 00:11:41.689 }, 00:11:41.689 { 00:11:41.689 "name": "BaseBdev3", 00:11:41.689 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:41.689 "is_configured": true, 00:11:41.689 "data_offset": 2048, 00:11:41.689 "data_size": 63488 00:11:41.689 }, 00:11:41.689 { 00:11:41.689 "name": "BaseBdev4", 00:11:41.689 "uuid": "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:41.689 "is_configured": true, 00:11:41.689 "data_offset": 2048, 00:11:41.689 "data_size": 63488 00:11:41.689 } 00:11:41.689 ] 00:11:41.689 }' 00:11:41.689 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.689 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.949 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.949 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.949 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.949 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.949 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.209 [2024-10-13 02:26:00.657742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.209 BaseBdev1 
00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.209 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.209 [ 00:11:42.209 { 00:11:42.209 "name": "BaseBdev1", 00:11:42.209 "aliases": [ 00:11:42.209 "0948c477-d63e-43a6-b7e1-fca4f118983a" 00:11:42.209 ], 00:11:42.209 "product_name": "Malloc disk", 00:11:42.209 "block_size": 512, 00:11:42.209 "num_blocks": 65536, 00:11:42.209 "uuid": "0948c477-d63e-43a6-b7e1-fca4f118983a", 00:11:42.209 "assigned_rate_limits": { 00:11:42.209 
"rw_ios_per_sec": 0, 00:11:42.209 "rw_mbytes_per_sec": 0, 00:11:42.209 "r_mbytes_per_sec": 0, 00:11:42.209 "w_mbytes_per_sec": 0 00:11:42.209 }, 00:11:42.210 "claimed": true, 00:11:42.210 "claim_type": "exclusive_write", 00:11:42.210 "zoned": false, 00:11:42.210 "supported_io_types": { 00:11:42.210 "read": true, 00:11:42.210 "write": true, 00:11:42.210 "unmap": true, 00:11:42.210 "flush": true, 00:11:42.210 "reset": true, 00:11:42.210 "nvme_admin": false, 00:11:42.210 "nvme_io": false, 00:11:42.210 "nvme_io_md": false, 00:11:42.210 "write_zeroes": true, 00:11:42.210 "zcopy": true, 00:11:42.210 "get_zone_info": false, 00:11:42.210 "zone_management": false, 00:11:42.210 "zone_append": false, 00:11:42.210 "compare": false, 00:11:42.210 "compare_and_write": false, 00:11:42.210 "abort": true, 00:11:42.210 "seek_hole": false, 00:11:42.210 "seek_data": false, 00:11:42.210 "copy": true, 00:11:42.210 "nvme_iov_md": false 00:11:42.210 }, 00:11:42.210 "memory_domains": [ 00:11:42.210 { 00:11:42.210 "dma_device_id": "system", 00:11:42.210 "dma_device_type": 1 00:11:42.210 }, 00:11:42.210 { 00:11:42.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.210 "dma_device_type": 2 00:11:42.210 } 00:11:42.210 ], 00:11:42.210 "driver_specific": {} 00:11:42.210 } 00:11:42.210 ] 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.210 "name": "Existed_Raid", 00:11:42.210 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:42.210 "strip_size_kb": 0, 00:11:42.210 "state": "configuring", 00:11:42.210 "raid_level": "raid1", 00:11:42.210 "superblock": true, 00:11:42.210 "num_base_bdevs": 4, 00:11:42.210 "num_base_bdevs_discovered": 3, 00:11:42.210 "num_base_bdevs_operational": 4, 00:11:42.210 "base_bdevs_list": [ 00:11:42.210 { 00:11:42.210 "name": "BaseBdev1", 00:11:42.210 "uuid": "0948c477-d63e-43a6-b7e1-fca4f118983a", 00:11:42.210 "is_configured": true, 00:11:42.210 "data_offset": 2048, 00:11:42.210 "data_size": 63488 
00:11:42.210 }, 00:11:42.210 { 00:11:42.210 "name": null, 00:11:42.210 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 00:11:42.210 "is_configured": false, 00:11:42.210 "data_offset": 0, 00:11:42.210 "data_size": 63488 00:11:42.210 }, 00:11:42.210 { 00:11:42.210 "name": "BaseBdev3", 00:11:42.210 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:42.210 "is_configured": true, 00:11:42.210 "data_offset": 2048, 00:11:42.210 "data_size": 63488 00:11:42.210 }, 00:11:42.210 { 00:11:42.210 "name": "BaseBdev4", 00:11:42.210 "uuid": "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:42.210 "is_configured": true, 00:11:42.210 "data_offset": 2048, 00:11:42.210 "data_size": 63488 00:11:42.210 } 00:11:42.210 ] 00:11:42.210 }' 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.210 02:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.781 
[2024-10-13 02:26:01.204893] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.781 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.782 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.782 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.782 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.782 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.782 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.782 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.782 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.782 02:26:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.782 "name": "Existed_Raid", 00:11:42.782 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:42.782 "strip_size_kb": 0, 00:11:42.782 "state": "configuring", 00:11:42.782 "raid_level": "raid1", 00:11:42.782 "superblock": true, 00:11:42.782 "num_base_bdevs": 4, 00:11:42.782 "num_base_bdevs_discovered": 2, 00:11:42.782 "num_base_bdevs_operational": 4, 00:11:42.782 "base_bdevs_list": [ 00:11:42.782 { 00:11:42.782 "name": "BaseBdev1", 00:11:42.782 "uuid": "0948c477-d63e-43a6-b7e1-fca4f118983a", 00:11:42.782 "is_configured": true, 00:11:42.782 "data_offset": 2048, 00:11:42.782 "data_size": 63488 00:11:42.782 }, 00:11:42.782 { 00:11:42.782 "name": null, 00:11:42.782 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 00:11:42.782 "is_configured": false, 00:11:42.782 "data_offset": 0, 00:11:42.782 "data_size": 63488 00:11:42.782 }, 00:11:42.782 { 00:11:42.782 "name": null, 00:11:42.782 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:42.782 "is_configured": false, 00:11:42.782 "data_offset": 0, 00:11:42.782 "data_size": 63488 00:11:42.782 }, 00:11:42.782 { 00:11:42.782 "name": "BaseBdev4", 00:11:42.782 "uuid": "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:42.782 "is_configured": true, 00:11:42.782 "data_offset": 2048, 00:11:42.782 "data_size": 63488 00:11:42.782 } 00:11:42.782 ] 00:11:42.782 }' 00:11:42.782 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.782 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.042 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.042 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.042 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.042 02:26:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:43.042 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.302 [2024-10-13 02:26:01.744064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.302 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.302 "name": "Existed_Raid", 00:11:43.302 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:43.302 "strip_size_kb": 0, 00:11:43.302 "state": "configuring", 00:11:43.302 "raid_level": "raid1", 00:11:43.302 "superblock": true, 00:11:43.302 "num_base_bdevs": 4, 00:11:43.302 "num_base_bdevs_discovered": 3, 00:11:43.302 "num_base_bdevs_operational": 4, 00:11:43.302 "base_bdevs_list": [ 00:11:43.302 { 00:11:43.302 "name": "BaseBdev1", 00:11:43.302 "uuid": "0948c477-d63e-43a6-b7e1-fca4f118983a", 00:11:43.302 "is_configured": true, 00:11:43.302 "data_offset": 2048, 00:11:43.302 "data_size": 63488 00:11:43.302 }, 00:11:43.302 { 00:11:43.302 "name": null, 00:11:43.302 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 00:11:43.302 "is_configured": false, 00:11:43.302 "data_offset": 0, 00:11:43.302 "data_size": 63488 00:11:43.302 }, 00:11:43.302 { 00:11:43.302 "name": "BaseBdev3", 00:11:43.302 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:43.302 "is_configured": true, 00:11:43.302 "data_offset": 2048, 00:11:43.302 "data_size": 63488 00:11:43.302 }, 00:11:43.302 { 00:11:43.302 "name": "BaseBdev4", 00:11:43.302 "uuid": 
"e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:43.302 "is_configured": true, 00:11:43.302 "data_offset": 2048, 00:11:43.302 "data_size": 63488 00:11:43.302 } 00:11:43.302 ] 00:11:43.302 }' 00:11:43.303 02:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.303 02:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.563 [2024-10-13 02:26:02.215164] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.563 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.823 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.823 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.823 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.823 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.823 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.823 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.823 "name": "Existed_Raid", 00:11:43.823 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:43.823 "strip_size_kb": 0, 00:11:43.823 "state": "configuring", 00:11:43.823 "raid_level": "raid1", 00:11:43.823 "superblock": true, 00:11:43.823 "num_base_bdevs": 4, 00:11:43.823 "num_base_bdevs_discovered": 2, 00:11:43.823 "num_base_bdevs_operational": 4, 00:11:43.823 "base_bdevs_list": [ 00:11:43.823 { 00:11:43.823 "name": null, 00:11:43.823 
"uuid": "0948c477-d63e-43a6-b7e1-fca4f118983a", 00:11:43.823 "is_configured": false, 00:11:43.823 "data_offset": 0, 00:11:43.823 "data_size": 63488 00:11:43.823 }, 00:11:43.823 { 00:11:43.823 "name": null, 00:11:43.823 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 00:11:43.823 "is_configured": false, 00:11:43.823 "data_offset": 0, 00:11:43.823 "data_size": 63488 00:11:43.823 }, 00:11:43.823 { 00:11:43.823 "name": "BaseBdev3", 00:11:43.823 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:43.823 "is_configured": true, 00:11:43.823 "data_offset": 2048, 00:11:43.823 "data_size": 63488 00:11:43.823 }, 00:11:43.823 { 00:11:43.823 "name": "BaseBdev4", 00:11:43.823 "uuid": "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:43.823 "is_configured": true, 00:11:43.823 "data_offset": 2048, 00:11:43.823 "data_size": 63488 00:11:43.823 } 00:11:43.823 ] 00:11:43.823 }' 00:11:43.823 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.823 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.083 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.084 [2024-10-13 02:26:02.693905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.084 02:26:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.084 "name": "Existed_Raid", 00:11:44.084 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:44.084 "strip_size_kb": 0, 00:11:44.084 "state": "configuring", 00:11:44.084 "raid_level": "raid1", 00:11:44.084 "superblock": true, 00:11:44.084 "num_base_bdevs": 4, 00:11:44.084 "num_base_bdevs_discovered": 3, 00:11:44.084 "num_base_bdevs_operational": 4, 00:11:44.084 "base_bdevs_list": [ 00:11:44.084 { 00:11:44.084 "name": null, 00:11:44.084 "uuid": "0948c477-d63e-43a6-b7e1-fca4f118983a", 00:11:44.084 "is_configured": false, 00:11:44.084 "data_offset": 0, 00:11:44.084 "data_size": 63488 00:11:44.084 }, 00:11:44.084 { 00:11:44.084 "name": "BaseBdev2", 00:11:44.084 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 00:11:44.084 "is_configured": true, 00:11:44.084 "data_offset": 2048, 00:11:44.084 "data_size": 63488 00:11:44.084 }, 00:11:44.084 { 00:11:44.084 "name": "BaseBdev3", 00:11:44.084 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:44.084 "is_configured": true, 00:11:44.084 "data_offset": 2048, 00:11:44.084 "data_size": 63488 00:11:44.084 }, 00:11:44.084 { 00:11:44.084 "name": "BaseBdev4", 00:11:44.084 "uuid": "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:44.084 "is_configured": true, 00:11:44.084 "data_offset": 2048, 00:11:44.084 "data_size": 63488 00:11:44.084 } 00:11:44.084 ] 00:11:44.084 }' 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.084 02:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.654 02:26:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0948c477-d63e-43a6-b7e1-fca4f118983a 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 [2024-10-13 02:26:03.269633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.654 [2024-10-13 02:26:03.269934] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:44.654 [2024-10-13 02:26:03.269988] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:44.654 [2024-10-13 02:26:03.270337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:11:44.654 [2024-10-13 02:26:03.270508] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:44.654 NewBaseBdev 00:11:44.654 [2024-10-13 02:26:03.270555] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:11:44.654 [2024-10-13 02:26:03.270696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.654 02:26:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.654 [ 00:11:44.654 { 00:11:44.654 "name": "NewBaseBdev", 00:11:44.654 "aliases": [ 00:11:44.654 "0948c477-d63e-43a6-b7e1-fca4f118983a" 00:11:44.654 ], 00:11:44.654 "product_name": "Malloc disk", 00:11:44.654 "block_size": 512, 00:11:44.654 "num_blocks": 65536, 00:11:44.654 "uuid": "0948c477-d63e-43a6-b7e1-fca4f118983a", 00:11:44.654 "assigned_rate_limits": { 00:11:44.654 "rw_ios_per_sec": 0, 00:11:44.654 "rw_mbytes_per_sec": 0, 00:11:44.654 "r_mbytes_per_sec": 0, 00:11:44.654 "w_mbytes_per_sec": 0 00:11:44.654 }, 00:11:44.654 "claimed": true, 00:11:44.654 "claim_type": "exclusive_write", 00:11:44.654 "zoned": false, 00:11:44.654 "supported_io_types": { 00:11:44.654 "read": true, 00:11:44.654 "write": true, 00:11:44.654 "unmap": true, 00:11:44.654 "flush": true, 00:11:44.654 "reset": true, 00:11:44.654 "nvme_admin": false, 00:11:44.654 "nvme_io": false, 00:11:44.654 "nvme_io_md": false, 00:11:44.654 "write_zeroes": true, 00:11:44.654 "zcopy": true, 00:11:44.654 "get_zone_info": false, 00:11:44.654 "zone_management": false, 00:11:44.654 "zone_append": false, 00:11:44.654 "compare": false, 00:11:44.654 "compare_and_write": false, 00:11:44.654 "abort": true, 00:11:44.654 "seek_hole": false, 00:11:44.654 "seek_data": false, 00:11:44.654 "copy": true, 00:11:44.654 "nvme_iov_md": false 00:11:44.654 }, 00:11:44.654 "memory_domains": [ 00:11:44.654 { 00:11:44.654 "dma_device_id": "system", 00:11:44.654 "dma_device_type": 1 00:11:44.654 }, 00:11:44.654 { 00:11:44.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.654 "dma_device_type": 2 00:11:44.654 } 00:11:44.654 ], 00:11:44.654 "driver_specific": {} 00:11:44.654 } 00:11:44.654 ] 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:44.654 02:26:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.654 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.915 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.915 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.915 "name": "Existed_Raid", 00:11:44.915 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:44.915 "strip_size_kb": 0, 00:11:44.915 
"state": "online", 00:11:44.915 "raid_level": "raid1", 00:11:44.915 "superblock": true, 00:11:44.915 "num_base_bdevs": 4, 00:11:44.915 "num_base_bdevs_discovered": 4, 00:11:44.915 "num_base_bdevs_operational": 4, 00:11:44.915 "base_bdevs_list": [ 00:11:44.915 { 00:11:44.915 "name": "NewBaseBdev", 00:11:44.915 "uuid": "0948c477-d63e-43a6-b7e1-fca4f118983a", 00:11:44.915 "is_configured": true, 00:11:44.915 "data_offset": 2048, 00:11:44.915 "data_size": 63488 00:11:44.915 }, 00:11:44.915 { 00:11:44.915 "name": "BaseBdev2", 00:11:44.915 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 00:11:44.915 "is_configured": true, 00:11:44.915 "data_offset": 2048, 00:11:44.915 "data_size": 63488 00:11:44.915 }, 00:11:44.915 { 00:11:44.915 "name": "BaseBdev3", 00:11:44.915 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:44.915 "is_configured": true, 00:11:44.915 "data_offset": 2048, 00:11:44.915 "data_size": 63488 00:11:44.915 }, 00:11:44.915 { 00:11:44.915 "name": "BaseBdev4", 00:11:44.915 "uuid": "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:44.915 "is_configured": true, 00:11:44.915 "data_offset": 2048, 00:11:44.915 "data_size": 63488 00:11:44.915 } 00:11:44.915 ] 00:11:44.915 }' 00:11:44.915 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.915 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.176 
02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.176 [2024-10-13 02:26:03.769175] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.176 "name": "Existed_Raid", 00:11:45.176 "aliases": [ 00:11:45.176 "2d7ecc06-84a2-4430-86b9-88c672eece3d" 00:11:45.176 ], 00:11:45.176 "product_name": "Raid Volume", 00:11:45.176 "block_size": 512, 00:11:45.176 "num_blocks": 63488, 00:11:45.176 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:45.176 "assigned_rate_limits": { 00:11:45.176 "rw_ios_per_sec": 0, 00:11:45.176 "rw_mbytes_per_sec": 0, 00:11:45.176 "r_mbytes_per_sec": 0, 00:11:45.176 "w_mbytes_per_sec": 0 00:11:45.176 }, 00:11:45.176 "claimed": false, 00:11:45.176 "zoned": false, 00:11:45.176 "supported_io_types": { 00:11:45.176 "read": true, 00:11:45.176 "write": true, 00:11:45.176 "unmap": false, 00:11:45.176 "flush": false, 00:11:45.176 "reset": true, 00:11:45.176 "nvme_admin": false, 00:11:45.176 "nvme_io": false, 00:11:45.176 "nvme_io_md": false, 00:11:45.176 "write_zeroes": true, 00:11:45.176 "zcopy": false, 00:11:45.176 "get_zone_info": false, 00:11:45.176 "zone_management": false, 00:11:45.176 "zone_append": false, 00:11:45.176 "compare": false, 00:11:45.176 "compare_and_write": false, 00:11:45.176 
"abort": false, 00:11:45.176 "seek_hole": false, 00:11:45.176 "seek_data": false, 00:11:45.176 "copy": false, 00:11:45.176 "nvme_iov_md": false 00:11:45.176 }, 00:11:45.176 "memory_domains": [ 00:11:45.176 { 00:11:45.176 "dma_device_id": "system", 00:11:45.176 "dma_device_type": 1 00:11:45.176 }, 00:11:45.176 { 00:11:45.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.176 "dma_device_type": 2 00:11:45.176 }, 00:11:45.176 { 00:11:45.176 "dma_device_id": "system", 00:11:45.176 "dma_device_type": 1 00:11:45.176 }, 00:11:45.176 { 00:11:45.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.176 "dma_device_type": 2 00:11:45.176 }, 00:11:45.176 { 00:11:45.176 "dma_device_id": "system", 00:11:45.176 "dma_device_type": 1 00:11:45.176 }, 00:11:45.176 { 00:11:45.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.176 "dma_device_type": 2 00:11:45.176 }, 00:11:45.176 { 00:11:45.176 "dma_device_id": "system", 00:11:45.176 "dma_device_type": 1 00:11:45.176 }, 00:11:45.176 { 00:11:45.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.176 "dma_device_type": 2 00:11:45.176 } 00:11:45.176 ], 00:11:45.176 "driver_specific": { 00:11:45.176 "raid": { 00:11:45.176 "uuid": "2d7ecc06-84a2-4430-86b9-88c672eece3d", 00:11:45.176 "strip_size_kb": 0, 00:11:45.176 "state": "online", 00:11:45.176 "raid_level": "raid1", 00:11:45.176 "superblock": true, 00:11:45.176 "num_base_bdevs": 4, 00:11:45.176 "num_base_bdevs_discovered": 4, 00:11:45.176 "num_base_bdevs_operational": 4, 00:11:45.176 "base_bdevs_list": [ 00:11:45.176 { 00:11:45.176 "name": "NewBaseBdev", 00:11:45.176 "uuid": "0948c477-d63e-43a6-b7e1-fca4f118983a", 00:11:45.176 "is_configured": true, 00:11:45.176 "data_offset": 2048, 00:11:45.176 "data_size": 63488 00:11:45.176 }, 00:11:45.176 { 00:11:45.176 "name": "BaseBdev2", 00:11:45.176 "uuid": "5cad8050-0923-4ffe-a72a-719f7bac4ecf", 00:11:45.176 "is_configured": true, 00:11:45.176 "data_offset": 2048, 00:11:45.176 "data_size": 63488 00:11:45.176 }, 00:11:45.176 { 
00:11:45.176 "name": "BaseBdev3", 00:11:45.176 "uuid": "f2646415-52a0-41e4-a0b7-6e69d88ffdc4", 00:11:45.176 "is_configured": true, 00:11:45.176 "data_offset": 2048, 00:11:45.176 "data_size": 63488 00:11:45.176 }, 00:11:45.176 { 00:11:45.176 "name": "BaseBdev4", 00:11:45.176 "uuid": "e2ebce8c-6a20-422e-ac0a-da9ec4b2d005", 00:11:45.176 "is_configured": true, 00:11:45.176 "data_offset": 2048, 00:11:45.176 "data_size": 63488 00:11:45.176 } 00:11:45.176 ] 00:11:45.176 } 00:11:45.176 } 00:11:45.176 }' 00:11:45.176 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:45.437 BaseBdev2 00:11:45.437 BaseBdev3 00:11:45.437 BaseBdev4' 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.437 02:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.437 [2024-10-13 02:26:04.068310] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.437 [2024-10-13 02:26:04.068340] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.437 [2024-10-13 02:26:04.068425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.437 [2024-10-13 02:26:04.068711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.437 [2024-10-13 02:26:04.068727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84551 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84551 ']' 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84551 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84551 00:11:45.437 killing process with pid 84551 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84551' 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84551 00:11:45.437 [2024-10-13 02:26:04.117046] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:45.437 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84551 00:11:45.699 [2024-10-13 02:26:04.194486] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.972 ************************************ 00:11:45.972 END TEST raid_state_function_test_sb 00:11:45.973 ************************************ 00:11:45.973 02:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:45.973 00:11:45.973 real 0m9.828s 
00:11:45.973 user 0m16.446s 00:11:45.973 sys 0m2.158s 00:11:45.973 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.973 02:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.973 02:26:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:45.973 02:26:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:45.973 02:26:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.973 02:26:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.973 ************************************ 00:11:45.973 START TEST raid_superblock_test 00:11:45.973 ************************************ 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:45.973 02:26:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85205 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85205 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85205 ']' 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.973 02:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.247 02:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:46.247 02:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.247 [2024-10-13 02:26:04.727925] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:46.247 [2024-10-13 02:26:04.728173] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85205 ] 00:11:46.247 [2024-10-13 02:26:04.875098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.507 [2024-10-13 02:26:04.946267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.507 [2024-10-13 02:26:05.023366] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.507 [2024-10-13 02:26:05.023434] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:47.080 
02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.080 malloc1 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.080 [2024-10-13 02:26:05.589827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.080 [2024-10-13 02:26:05.589952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.080 [2024-10-13 02:26:05.589988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:47.080 [2024-10-13 02:26:05.590023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.080 [2024-10-13 02:26:05.592531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.080 [2024-10-13 02:26:05.592606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.080 pt1 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.080 malloc2 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.080 [2024-10-13 02:26:05.634664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.080 [2024-10-13 02:26:05.634722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.080 [2024-10-13 02:26:05.634738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:47.080 [2024-10-13 02:26:05.634751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.080 [2024-10-13 02:26:05.637269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.080 [2024-10-13 02:26:05.637349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.080 
pt2 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:47.080 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.081 malloc3 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.081 [2024-10-13 02:26:05.669400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.081 [2024-10-13 02:26:05.669494] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.081 [2024-10-13 02:26:05.669527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:47.081 [2024-10-13 02:26:05.669556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.081 [2024-10-13 02:26:05.671976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.081 [2024-10-13 02:26:05.672046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.081 pt3 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.081 malloc4 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.081 [2024-10-13 02:26:05.708193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.081 [2024-10-13 02:26:05.708279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.081 [2024-10-13 02:26:05.708328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:47.081 [2024-10-13 02:26:05.708361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.081 [2024-10-13 02:26:05.710672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.081 [2024-10-13 02:26:05.710755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.081 pt4 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.081 [2024-10-13 02:26:05.720222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.081 [2024-10-13 02:26:05.722284] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.081 [2024-10-13 02:26:05.722357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.081 [2024-10-13 02:26:05.722401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.081 [2024-10-13 02:26:05.722565] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:47.081 [2024-10-13 02:26:05.722579] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.081 [2024-10-13 02:26:05.722856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:47.081 [2024-10-13 02:26:05.723028] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:47.081 [2024-10-13 02:26:05.723039] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:47.081 [2024-10-13 02:26:05.723154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.081 
02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.081 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.342 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.342 "name": "raid_bdev1", 00:11:47.342 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:47.342 "strip_size_kb": 0, 00:11:47.342 "state": "online", 00:11:47.342 "raid_level": "raid1", 00:11:47.342 "superblock": true, 00:11:47.342 "num_base_bdevs": 4, 00:11:47.342 "num_base_bdevs_discovered": 4, 00:11:47.342 "num_base_bdevs_operational": 4, 00:11:47.342 "base_bdevs_list": [ 00:11:47.342 { 00:11:47.342 "name": "pt1", 00:11:47.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.342 "is_configured": true, 00:11:47.342 "data_offset": 2048, 00:11:47.342 "data_size": 63488 00:11:47.342 }, 00:11:47.342 { 00:11:47.342 "name": "pt2", 00:11:47.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.342 "is_configured": true, 00:11:47.342 "data_offset": 2048, 00:11:47.342 "data_size": 63488 00:11:47.342 }, 00:11:47.342 { 00:11:47.342 "name": "pt3", 00:11:47.342 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.342 "is_configured": true, 00:11:47.342 "data_offset": 2048, 00:11:47.342 "data_size": 63488 
00:11:47.342 }, 00:11:47.342 { 00:11:47.342 "name": "pt4", 00:11:47.342 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.342 "is_configured": true, 00:11:47.342 "data_offset": 2048, 00:11:47.342 "data_size": 63488 00:11:47.342 } 00:11:47.342 ] 00:11:47.342 }' 00:11:47.342 02:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.342 02:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.602 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:47.602 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:47.602 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.602 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.602 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.603 [2024-10-13 02:26:06.199751] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.603 "name": "raid_bdev1", 00:11:47.603 "aliases": [ 00:11:47.603 "87f5fcb7-be45-49cc-b225-91ad2cf504da" 00:11:47.603 ], 
00:11:47.603 "product_name": "Raid Volume", 00:11:47.603 "block_size": 512, 00:11:47.603 "num_blocks": 63488, 00:11:47.603 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:47.603 "assigned_rate_limits": { 00:11:47.603 "rw_ios_per_sec": 0, 00:11:47.603 "rw_mbytes_per_sec": 0, 00:11:47.603 "r_mbytes_per_sec": 0, 00:11:47.603 "w_mbytes_per_sec": 0 00:11:47.603 }, 00:11:47.603 "claimed": false, 00:11:47.603 "zoned": false, 00:11:47.603 "supported_io_types": { 00:11:47.603 "read": true, 00:11:47.603 "write": true, 00:11:47.603 "unmap": false, 00:11:47.603 "flush": false, 00:11:47.603 "reset": true, 00:11:47.603 "nvme_admin": false, 00:11:47.603 "nvme_io": false, 00:11:47.603 "nvme_io_md": false, 00:11:47.603 "write_zeroes": true, 00:11:47.603 "zcopy": false, 00:11:47.603 "get_zone_info": false, 00:11:47.603 "zone_management": false, 00:11:47.603 "zone_append": false, 00:11:47.603 "compare": false, 00:11:47.603 "compare_and_write": false, 00:11:47.603 "abort": false, 00:11:47.603 "seek_hole": false, 00:11:47.603 "seek_data": false, 00:11:47.603 "copy": false, 00:11:47.603 "nvme_iov_md": false 00:11:47.603 }, 00:11:47.603 "memory_domains": [ 00:11:47.603 { 00:11:47.603 "dma_device_id": "system", 00:11:47.603 "dma_device_type": 1 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.603 "dma_device_type": 2 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "dma_device_id": "system", 00:11:47.603 "dma_device_type": 1 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.603 "dma_device_type": 2 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "dma_device_id": "system", 00:11:47.603 "dma_device_type": 1 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.603 "dma_device_type": 2 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "dma_device_id": "system", 00:11:47.603 "dma_device_type": 1 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:47.603 "dma_device_type": 2 00:11:47.603 } 00:11:47.603 ], 00:11:47.603 "driver_specific": { 00:11:47.603 "raid": { 00:11:47.603 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:47.603 "strip_size_kb": 0, 00:11:47.603 "state": "online", 00:11:47.603 "raid_level": "raid1", 00:11:47.603 "superblock": true, 00:11:47.603 "num_base_bdevs": 4, 00:11:47.603 "num_base_bdevs_discovered": 4, 00:11:47.603 "num_base_bdevs_operational": 4, 00:11:47.603 "base_bdevs_list": [ 00:11:47.603 { 00:11:47.603 "name": "pt1", 00:11:47.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.603 "is_configured": true, 00:11:47.603 "data_offset": 2048, 00:11:47.603 "data_size": 63488 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "name": "pt2", 00:11:47.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.603 "is_configured": true, 00:11:47.603 "data_offset": 2048, 00:11:47.603 "data_size": 63488 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "name": "pt3", 00:11:47.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.603 "is_configured": true, 00:11:47.603 "data_offset": 2048, 00:11:47.603 "data_size": 63488 00:11:47.603 }, 00:11:47.603 { 00:11:47.603 "name": "pt4", 00:11:47.603 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.603 "is_configured": true, 00:11:47.603 "data_offset": 2048, 00:11:47.603 "data_size": 63488 00:11:47.603 } 00:11:47.603 ] 00:11:47.603 } 00:11:47.603 } 00:11:47.603 }' 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:47.603 pt2 00:11:47.603 pt3 00:11:47.603 pt4' 00:11:47.603 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.864 02:26:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:47.864 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.864 [2024-10-13 02:26:06.531261] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=87f5fcb7-be45-49cc-b225-91ad2cf504da 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 87f5fcb7-be45-49cc-b225-91ad2cf504da ']' 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.125 [2024-10-13 02:26:06.586870] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.125 [2024-10-13 02:26:06.586962] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.125 [2024-10-13 02:26:06.587127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.125 [2024-10-13 02:26:06.587292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.125 [2024-10-13 02:26:06.587341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:48.125 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.126 02:26:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.126 [2024-10-13 02:26:06.750504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:48.126 [2024-10-13 02:26:06.752690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:48.126 [2024-10-13 02:26:06.752740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:48.126 [2024-10-13 02:26:06.752775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:48.126 [2024-10-13 02:26:06.752828] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:48.126 [2024-10-13 02:26:06.752887] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:48.126 [2024-10-13 02:26:06.752908] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:48.126 [2024-10-13 02:26:06.752925] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:48.126 [2024-10-13 02:26:06.752939] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.126 [2024-10-13 02:26:06.752958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name 
raid_bdev1, state configuring 00:11:48.126 request: 00:11:48.126 { 00:11:48.126 "name": "raid_bdev1", 00:11:48.126 "raid_level": "raid1", 00:11:48.126 "base_bdevs": [ 00:11:48.126 "malloc1", 00:11:48.126 "malloc2", 00:11:48.126 "malloc3", 00:11:48.126 "malloc4" 00:11:48.126 ], 00:11:48.126 "superblock": false, 00:11:48.126 "method": "bdev_raid_create", 00:11:48.126 "req_id": 1 00:11:48.126 } 00:11:48.126 Got JSON-RPC error response 00:11:48.126 response: 00:11:48.126 { 00:11:48.126 "code": -17, 00:11:48.126 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:48.126 } 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.126 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.385 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:48.385 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:48.385 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:48.385 
02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.385 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.385 [2024-10-13 02:26:06.818407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.385 [2024-10-13 02:26:06.818563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.386 [2024-10-13 02:26:06.818624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:48.386 [2024-10-13 02:26:06.818659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.386 [2024-10-13 02:26:06.821431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.386 [2024-10-13 02:26:06.821511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.386 [2024-10-13 02:26:06.821677] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:48.386 [2024-10-13 02:26:06.821767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:48.386 pt1 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.386 02:26:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.386 "name": "raid_bdev1", 00:11:48.386 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:48.386 "strip_size_kb": 0, 00:11:48.386 "state": "configuring", 00:11:48.386 "raid_level": "raid1", 00:11:48.386 "superblock": true, 00:11:48.386 "num_base_bdevs": 4, 00:11:48.386 "num_base_bdevs_discovered": 1, 00:11:48.386 "num_base_bdevs_operational": 4, 00:11:48.386 "base_bdevs_list": [ 00:11:48.386 { 00:11:48.386 "name": "pt1", 00:11:48.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.386 "is_configured": true, 00:11:48.386 "data_offset": 2048, 00:11:48.386 "data_size": 63488 00:11:48.386 }, 00:11:48.386 { 00:11:48.386 "name": null, 00:11:48.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.386 "is_configured": false, 00:11:48.386 "data_offset": 2048, 00:11:48.386 "data_size": 63488 00:11:48.386 }, 00:11:48.386 { 00:11:48.386 "name": null, 00:11:48.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.386 
"is_configured": false, 00:11:48.386 "data_offset": 2048, 00:11:48.386 "data_size": 63488 00:11:48.386 }, 00:11:48.386 { 00:11:48.386 "name": null, 00:11:48.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.386 "is_configured": false, 00:11:48.386 "data_offset": 2048, 00:11:48.386 "data_size": 63488 00:11:48.386 } 00:11:48.386 ] 00:11:48.386 }' 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.386 02:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.646 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:48.646 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.646 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.646 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.646 [2024-10-13 02:26:07.261633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.646 [2024-10-13 02:26:07.261758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.646 [2024-10-13 02:26:07.261806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:48.646 [2024-10-13 02:26:07.261837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.646 [2024-10-13 02:26:07.262363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.646 [2024-10-13 02:26:07.262421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.646 [2024-10-13 02:26:07.262547] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:48.646 [2024-10-13 02:26:07.262611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:48.646 pt2 00:11:48.646 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.647 [2024-10-13 02:26:07.273619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.647 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.907 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.907 "name": "raid_bdev1", 00:11:48.907 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:48.907 "strip_size_kb": 0, 00:11:48.907 "state": "configuring", 00:11:48.907 "raid_level": "raid1", 00:11:48.907 "superblock": true, 00:11:48.907 "num_base_bdevs": 4, 00:11:48.907 "num_base_bdevs_discovered": 1, 00:11:48.907 "num_base_bdevs_operational": 4, 00:11:48.907 "base_bdevs_list": [ 00:11:48.907 { 00:11:48.907 "name": "pt1", 00:11:48.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.907 "is_configured": true, 00:11:48.907 "data_offset": 2048, 00:11:48.907 "data_size": 63488 00:11:48.907 }, 00:11:48.907 { 00:11:48.907 "name": null, 00:11:48.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.907 "is_configured": false, 00:11:48.907 "data_offset": 0, 00:11:48.907 "data_size": 63488 00:11:48.907 }, 00:11:48.907 { 00:11:48.907 "name": null, 00:11:48.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.907 "is_configured": false, 00:11:48.907 "data_offset": 2048, 00:11:48.907 "data_size": 63488 00:11:48.907 }, 00:11:48.907 { 00:11:48.907 "name": null, 00:11:48.907 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.907 "is_configured": false, 00:11:48.907 "data_offset": 2048, 00:11:48.907 "data_size": 63488 00:11:48.907 } 00:11:48.907 ] 00:11:48.907 }' 00:11:48.907 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.907 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.168 [2024-10-13 02:26:07.688966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.168 [2024-10-13 02:26:07.689095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.168 [2024-10-13 02:26:07.689148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:49.168 [2024-10-13 02:26:07.689186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.168 [2024-10-13 02:26:07.689705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.168 [2024-10-13 02:26:07.689766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.168 [2024-10-13 02:26:07.689898] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.168 [2024-10-13 02:26:07.689955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.168 pt2 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:49.168 02:26:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.168 [2024-10-13 02:26:07.700861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:49.168 [2024-10-13 02:26:07.700925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.168 [2024-10-13 02:26:07.700951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:49.168 [2024-10-13 02:26:07.700962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.168 [2024-10-13 02:26:07.701315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.168 [2024-10-13 02:26:07.701333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:49.168 [2024-10-13 02:26:07.701396] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:49.168 [2024-10-13 02:26:07.701418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.168 pt3 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.168 [2024-10-13 02:26:07.712858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.168 [2024-10-13 
02:26:07.712979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.168 [2024-10-13 02:26:07.713000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:49.168 [2024-10-13 02:26:07.713011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.168 [2024-10-13 02:26:07.713327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.168 [2024-10-13 02:26:07.713344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.168 [2024-10-13 02:26:07.713398] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:49.168 [2024-10-13 02:26:07.713418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.168 [2024-10-13 02:26:07.713519] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:11:49.168 [2024-10-13 02:26:07.713530] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.168 [2024-10-13 02:26:07.713765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:49.168 [2024-10-13 02:26:07.713914] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:11:49.168 [2024-10-13 02:26:07.713929] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:11:49.168 [2024-10-13 02:26:07.714040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.168 pt4 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.168 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.169 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.169 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.169 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.169 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.169 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.169 "name": "raid_bdev1", 00:11:49.169 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:49.169 "strip_size_kb": 0, 00:11:49.169 "state": "online", 00:11:49.169 "raid_level": "raid1", 00:11:49.169 "superblock": true, 00:11:49.169 "num_base_bdevs": 4, 00:11:49.169 
"num_base_bdevs_discovered": 4, 00:11:49.169 "num_base_bdevs_operational": 4, 00:11:49.169 "base_bdevs_list": [ 00:11:49.169 { 00:11:49.169 "name": "pt1", 00:11:49.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.169 "is_configured": true, 00:11:49.169 "data_offset": 2048, 00:11:49.169 "data_size": 63488 00:11:49.169 }, 00:11:49.169 { 00:11:49.169 "name": "pt2", 00:11:49.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.169 "is_configured": true, 00:11:49.169 "data_offset": 2048, 00:11:49.169 "data_size": 63488 00:11:49.169 }, 00:11:49.169 { 00:11:49.169 "name": "pt3", 00:11:49.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.169 "is_configured": true, 00:11:49.169 "data_offset": 2048, 00:11:49.169 "data_size": 63488 00:11:49.169 }, 00:11:49.169 { 00:11:49.169 "name": "pt4", 00:11:49.169 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.169 "is_configured": true, 00:11:49.169 "data_offset": 2048, 00:11:49.169 "data_size": 63488 00:11:49.169 } 00:11:49.169 ] 00:11:49.169 }' 00:11:49.169 02:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.169 02:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.745 [2024-10-13 02:26:08.188401] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.745 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:49.745 "name": "raid_bdev1", 00:11:49.745 "aliases": [ 00:11:49.745 "87f5fcb7-be45-49cc-b225-91ad2cf504da" 00:11:49.745 ], 00:11:49.745 "product_name": "Raid Volume", 00:11:49.745 "block_size": 512, 00:11:49.745 "num_blocks": 63488, 00:11:49.745 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:49.745 "assigned_rate_limits": { 00:11:49.745 "rw_ios_per_sec": 0, 00:11:49.745 "rw_mbytes_per_sec": 0, 00:11:49.745 "r_mbytes_per_sec": 0, 00:11:49.745 "w_mbytes_per_sec": 0 00:11:49.745 }, 00:11:49.745 "claimed": false, 00:11:49.745 "zoned": false, 00:11:49.745 "supported_io_types": { 00:11:49.745 "read": true, 00:11:49.746 "write": true, 00:11:49.746 "unmap": false, 00:11:49.746 "flush": false, 00:11:49.746 "reset": true, 00:11:49.746 "nvme_admin": false, 00:11:49.746 "nvme_io": false, 00:11:49.746 "nvme_io_md": false, 00:11:49.746 "write_zeroes": true, 00:11:49.746 "zcopy": false, 00:11:49.746 "get_zone_info": false, 00:11:49.746 "zone_management": false, 00:11:49.746 "zone_append": false, 00:11:49.746 "compare": false, 00:11:49.746 "compare_and_write": false, 00:11:49.746 "abort": false, 00:11:49.746 "seek_hole": false, 00:11:49.746 "seek_data": false, 00:11:49.746 "copy": false, 00:11:49.746 "nvme_iov_md": false 00:11:49.746 }, 00:11:49.746 "memory_domains": [ 00:11:49.746 { 00:11:49.746 "dma_device_id": "system", 00:11:49.746 
"dma_device_type": 1 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.746 "dma_device_type": 2 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "dma_device_id": "system", 00:11:49.746 "dma_device_type": 1 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.746 "dma_device_type": 2 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "dma_device_id": "system", 00:11:49.746 "dma_device_type": 1 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.746 "dma_device_type": 2 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "dma_device_id": "system", 00:11:49.746 "dma_device_type": 1 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.746 "dma_device_type": 2 00:11:49.746 } 00:11:49.746 ], 00:11:49.746 "driver_specific": { 00:11:49.746 "raid": { 00:11:49.746 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:49.746 "strip_size_kb": 0, 00:11:49.746 "state": "online", 00:11:49.746 "raid_level": "raid1", 00:11:49.746 "superblock": true, 00:11:49.746 "num_base_bdevs": 4, 00:11:49.746 "num_base_bdevs_discovered": 4, 00:11:49.746 "num_base_bdevs_operational": 4, 00:11:49.746 "base_bdevs_list": [ 00:11:49.746 { 00:11:49.746 "name": "pt1", 00:11:49.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.746 "is_configured": true, 00:11:49.746 "data_offset": 2048, 00:11:49.746 "data_size": 63488 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "name": "pt2", 00:11:49.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.746 "is_configured": true, 00:11:49.746 "data_offset": 2048, 00:11:49.746 "data_size": 63488 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "name": "pt3", 00:11:49.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.746 "is_configured": true, 00:11:49.746 "data_offset": 2048, 00:11:49.746 "data_size": 63488 00:11:49.746 }, 00:11:49.746 { 00:11:49.746 "name": "pt4", 00:11:49.746 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:49.746 "is_configured": true, 00:11:49.746 "data_offset": 2048, 00:11:49.746 "data_size": 63488 00:11:49.746 } 00:11:49.746 ] 00:11:49.746 } 00:11:49.746 } 00:11:49.746 }' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:49.746 pt2 00:11:49.746 pt3 00:11:49.746 pt4' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.746 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.007 02:26:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.007 [2024-10-13 02:26:08.491805] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 87f5fcb7-be45-49cc-b225-91ad2cf504da '!=' 87f5fcb7-be45-49cc-b225-91ad2cf504da ']' 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.007 [2024-10-13 02:26:08.527486] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:50.007 
02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.007 "name": "raid_bdev1", 00:11:50.007 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:50.007 "strip_size_kb": 0, 00:11:50.007 "state": 
"online", 00:11:50.007 "raid_level": "raid1", 00:11:50.007 "superblock": true, 00:11:50.007 "num_base_bdevs": 4, 00:11:50.007 "num_base_bdevs_discovered": 3, 00:11:50.007 "num_base_bdevs_operational": 3, 00:11:50.007 "base_bdevs_list": [ 00:11:50.007 { 00:11:50.007 "name": null, 00:11:50.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.007 "is_configured": false, 00:11:50.007 "data_offset": 0, 00:11:50.007 "data_size": 63488 00:11:50.007 }, 00:11:50.007 { 00:11:50.007 "name": "pt2", 00:11:50.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.007 "is_configured": true, 00:11:50.007 "data_offset": 2048, 00:11:50.007 "data_size": 63488 00:11:50.007 }, 00:11:50.007 { 00:11:50.007 "name": "pt3", 00:11:50.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.007 "is_configured": true, 00:11:50.007 "data_offset": 2048, 00:11:50.007 "data_size": 63488 00:11:50.007 }, 00:11:50.007 { 00:11:50.007 "name": "pt4", 00:11:50.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.007 "is_configured": true, 00:11:50.007 "data_offset": 2048, 00:11:50.007 "data_size": 63488 00:11:50.007 } 00:11:50.007 ] 00:11:50.007 }' 00:11:50.007 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.008 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:50.579 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.579 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 [2024-10-13 02:26:08.962760] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.579 [2024-10-13 02:26:08.962844] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.579 [2024-10-13 02:26:08.962978] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.579 [2024-10-13 02:26:08.963082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.579 [2024-10-13 02:26:08.963137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:11:50.579 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.579 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.579 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.579 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 02:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:50.579 02:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 [2024-10-13 02:26:09.062536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:50.579 [2024-10-13 
02:26:09.062647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.579 [2024-10-13 02:26:09.062682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:50.579 [2024-10-13 02:26:09.062715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.579 [2024-10-13 02:26:09.065255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.579 [2024-10-13 02:26:09.065342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:50.579 [2024-10-13 02:26:09.065444] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:50.579 [2024-10-13 02:26:09.065504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:50.579 pt2 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.579 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.580 02:26:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.580 "name": "raid_bdev1", 00:11:50.580 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:50.580 "strip_size_kb": 0, 00:11:50.580 "state": "configuring", 00:11:50.580 "raid_level": "raid1", 00:11:50.580 "superblock": true, 00:11:50.580 "num_base_bdevs": 4, 00:11:50.580 "num_base_bdevs_discovered": 1, 00:11:50.580 "num_base_bdevs_operational": 3, 00:11:50.580 "base_bdevs_list": [ 00:11:50.580 { 00:11:50.580 "name": null, 00:11:50.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.580 "is_configured": false, 00:11:50.580 "data_offset": 2048, 00:11:50.580 "data_size": 63488 00:11:50.580 }, 00:11:50.580 { 00:11:50.580 "name": "pt2", 00:11:50.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.580 "is_configured": true, 00:11:50.580 "data_offset": 2048, 00:11:50.580 "data_size": 63488 00:11:50.580 }, 00:11:50.580 { 00:11:50.580 "name": null, 00:11:50.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.580 "is_configured": false, 00:11:50.580 "data_offset": 2048, 00:11:50.580 "data_size": 63488 00:11:50.580 }, 00:11:50.580 { 00:11:50.580 "name": null, 00:11:50.580 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.580 "is_configured": false, 00:11:50.580 "data_offset": 2048, 00:11:50.580 "data_size": 63488 00:11:50.580 
} 00:11:50.580 ] 00:11:50.580 }' 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.580 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.840 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:50.840 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:50.840 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:50.840 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.840 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.840 [2024-10-13 02:26:09.505841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:50.840 [2024-10-13 02:26:09.505931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.840 [2024-10-13 02:26:09.505954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:50.840 [2024-10-13 02:26:09.505969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.840 [2024-10-13 02:26:09.506480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.840 [2024-10-13 02:26:09.506502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:50.840 [2024-10-13 02:26:09.506590] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:50.840 [2024-10-13 02:26:09.506626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:50.840 pt3 00:11:50.840 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.840 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.841 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.101 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.101 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.101 "name": "raid_bdev1", 00:11:51.101 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:51.101 "strip_size_kb": 0, 00:11:51.101 "state": "configuring", 00:11:51.101 "raid_level": "raid1", 00:11:51.101 "superblock": true, 00:11:51.101 "num_base_bdevs": 4, 00:11:51.101 "num_base_bdevs_discovered": 2, 
00:11:51.101 "num_base_bdevs_operational": 3, 00:11:51.101 "base_bdevs_list": [ 00:11:51.101 { 00:11:51.101 "name": null, 00:11:51.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.101 "is_configured": false, 00:11:51.101 "data_offset": 2048, 00:11:51.101 "data_size": 63488 00:11:51.101 }, 00:11:51.101 { 00:11:51.101 "name": "pt2", 00:11:51.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.101 "is_configured": true, 00:11:51.101 "data_offset": 2048, 00:11:51.101 "data_size": 63488 00:11:51.101 }, 00:11:51.101 { 00:11:51.101 "name": "pt3", 00:11:51.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.101 "is_configured": true, 00:11:51.101 "data_offset": 2048, 00:11:51.101 "data_size": 63488 00:11:51.101 }, 00:11:51.101 { 00:11:51.101 "name": null, 00:11:51.101 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.101 "is_configured": false, 00:11:51.101 "data_offset": 2048, 00:11:51.101 "data_size": 63488 00:11:51.101 } 00:11:51.101 ] 00:11:51.101 }' 00:11:51.101 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.101 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.362 [2024-10-13 02:26:09.949053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:51.362 [2024-10-13 
02:26:09.949188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.362 [2024-10-13 02:26:09.949231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:51.362 [2024-10-13 02:26:09.949266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.362 [2024-10-13 02:26:09.949801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.362 [2024-10-13 02:26:09.949880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:51.362 [2024-10-13 02:26:09.950010] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:51.362 [2024-10-13 02:26:09.950067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:51.362 [2024-10-13 02:26:09.950210] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:11:51.362 [2024-10-13 02:26:09.950248] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.362 [2024-10-13 02:26:09.950526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:51.362 [2024-10-13 02:26:09.950698] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:11:51.362 [2024-10-13 02:26:09.950738] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:11:51.362 [2024-10-13 02:26:09.950933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.362 pt4 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.362 02:26:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.362 02:26:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.362 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.362 "name": "raid_bdev1", 00:11:51.362 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:51.362 "strip_size_kb": 0, 00:11:51.362 "state": "online", 00:11:51.362 "raid_level": "raid1", 00:11:51.362 "superblock": true, 00:11:51.362 "num_base_bdevs": 4, 00:11:51.362 "num_base_bdevs_discovered": 3, 00:11:51.362 "num_base_bdevs_operational": 3, 00:11:51.362 "base_bdevs_list": [ 00:11:51.362 { 00:11:51.362 "name": null, 00:11:51.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.362 
"is_configured": false, 00:11:51.362 "data_offset": 2048, 00:11:51.362 "data_size": 63488 00:11:51.362 }, 00:11:51.362 { 00:11:51.362 "name": "pt2", 00:11:51.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.362 "is_configured": true, 00:11:51.362 "data_offset": 2048, 00:11:51.362 "data_size": 63488 00:11:51.362 }, 00:11:51.362 { 00:11:51.362 "name": "pt3", 00:11:51.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.362 "is_configured": true, 00:11:51.362 "data_offset": 2048, 00:11:51.362 "data_size": 63488 00:11:51.362 }, 00:11:51.362 { 00:11:51.362 "name": "pt4", 00:11:51.362 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.362 "is_configured": true, 00:11:51.362 "data_offset": 2048, 00:11:51.362 "data_size": 63488 00:11:51.362 } 00:11:51.362 ] 00:11:51.362 }' 00:11:51.362 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.362 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 [2024-10-13 02:26:10.396285] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.934 [2024-10-13 02:26:10.396366] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.934 [2024-10-13 02:26:10.396458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.934 [2024-10-13 02:26:10.396546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.934 [2024-10-13 02:26:10.396557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 
00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 [2024-10-13 02:26:10.468129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:51.934 [2024-10-13 02:26:10.468197] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:11:51.934 [2024-10-13 02:26:10.468222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:51.934 [2024-10-13 02:26:10.468231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.934 [2024-10-13 02:26:10.470800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.934 [2024-10-13 02:26:10.470901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:51.934 [2024-10-13 02:26:10.471009] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:51.934 [2024-10-13 02:26:10.471064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:51.934 [2024-10-13 02:26:10.471205] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:51.934 [2024-10-13 02:26:10.471220] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.934 [2024-10-13 02:26:10.471241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:11:51.934 [2024-10-13 02:26:10.471283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:51.934 [2024-10-13 02:26:10.471404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:51.934 pt1 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.934 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.934 "name": "raid_bdev1", 00:11:51.934 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:51.934 "strip_size_kb": 0, 00:11:51.934 "state": "configuring", 00:11:51.934 "raid_level": "raid1", 00:11:51.934 "superblock": true, 00:11:51.934 "num_base_bdevs": 4, 00:11:51.934 "num_base_bdevs_discovered": 2, 00:11:51.934 "num_base_bdevs_operational": 3, 00:11:51.934 "base_bdevs_list": [ 00:11:51.934 { 00:11:51.934 "name": null, 00:11:51.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.934 "is_configured": false, 00:11:51.934 
"data_offset": 2048, 00:11:51.934 "data_size": 63488 00:11:51.934 }, 00:11:51.934 { 00:11:51.935 "name": "pt2", 00:11:51.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.935 "is_configured": true, 00:11:51.935 "data_offset": 2048, 00:11:51.935 "data_size": 63488 00:11:51.935 }, 00:11:51.935 { 00:11:51.935 "name": "pt3", 00:11:51.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.935 "is_configured": true, 00:11:51.935 "data_offset": 2048, 00:11:51.935 "data_size": 63488 00:11:51.935 }, 00:11:51.935 { 00:11:51.935 "name": null, 00:11:51.935 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.935 "is_configured": false, 00:11:51.935 "data_offset": 2048, 00:11:51.935 "data_size": 63488 00:11:51.935 } 00:11:51.935 ] 00:11:51.935 }' 00:11:51.935 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.935 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.505 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:52.506 [2024-10-13 02:26:10.983265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:52.506 [2024-10-13 02:26:10.983379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.506 [2024-10-13 02:26:10.983407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:52.506 [2024-10-13 02:26:10.983419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.506 [2024-10-13 02:26:10.983905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.506 [2024-10-13 02:26:10.983928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:52.506 [2024-10-13 02:26:10.984009] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:52.506 [2024-10-13 02:26:10.984036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:52.506 [2024-10-13 02:26:10.984164] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:11:52.506 [2024-10-13 02:26:10.984179] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.506 [2024-10-13 02:26:10.984435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:52.506 [2024-10-13 02:26:10.984567] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:11:52.506 [2024-10-13 02:26:10.984576] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:11:52.506 [2024-10-13 02:26:10.984695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.506 pt4 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.506 02:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.506 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.506 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.506 "name": "raid_bdev1", 00:11:52.506 "uuid": "87f5fcb7-be45-49cc-b225-91ad2cf504da", 00:11:52.506 "strip_size_kb": 0, 00:11:52.506 "state": "online", 00:11:52.506 "raid_level": "raid1", 00:11:52.506 "superblock": true, 00:11:52.506 "num_base_bdevs": 4, 00:11:52.506 "num_base_bdevs_discovered": 3, 00:11:52.506 "num_base_bdevs_operational": 3, 00:11:52.506 
"base_bdevs_list": [ 00:11:52.506 { 00:11:52.506 "name": null, 00:11:52.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.506 "is_configured": false, 00:11:52.506 "data_offset": 2048, 00:11:52.506 "data_size": 63488 00:11:52.506 }, 00:11:52.506 { 00:11:52.506 "name": "pt2", 00:11:52.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.506 "is_configured": true, 00:11:52.506 "data_offset": 2048, 00:11:52.506 "data_size": 63488 00:11:52.506 }, 00:11:52.506 { 00:11:52.506 "name": "pt3", 00:11:52.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.506 "is_configured": true, 00:11:52.506 "data_offset": 2048, 00:11:52.506 "data_size": 63488 00:11:52.506 }, 00:11:52.506 { 00:11:52.506 "name": "pt4", 00:11:52.506 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:52.506 "is_configured": true, 00:11:52.506 "data_offset": 2048, 00:11:52.506 "data_size": 63488 00:11:52.506 } 00:11:52.506 ] 00:11:52.506 }' 00:11:52.506 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.506 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # 
jq -r '.[] | .uuid' 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.795 [2024-10-13 02:26:11.430769] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.795 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 87f5fcb7-be45-49cc-b225-91ad2cf504da '!=' 87f5fcb7-be45-49cc-b225-91ad2cf504da ']' 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85205 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85205 ']' 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85205 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85205 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85205' 00:11:53.089 killing process with pid 85205 00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85205 00:11:53.089 [2024-10-13 02:26:11.519170] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.089 [2024-10-13 02:26:11.519268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:11:53.089 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85205 00:11:53.089 [2024-10-13 02:26:11.519360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.089 [2024-10-13 02:26:11.519370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:11:53.089 [2024-10-13 02:26:11.599674] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.349 02:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:53.349 00:11:53.349 real 0m7.334s 00:11:53.349 user 0m12.079s 00:11:53.349 sys 0m1.653s 00:11:53.349 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.349 ************************************ 00:11:53.349 END TEST raid_superblock_test 00:11:53.349 ************************************ 00:11:53.349 02:26:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.349 02:26:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:53.349 02:26:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:53.349 02:26:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.349 02:26:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.610 ************************************ 00:11:53.610 START TEST raid_read_error_test 00:11:53.610 ************************************ 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:53.610 
02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:53.610 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hok0gIQJnl 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85682 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85682 00:11:53.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85682 ']' 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.611 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.611 [2024-10-13 02:26:12.145921] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:53.611 [2024-10-13 02:26:12.146086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85682 ] 00:11:53.871 [2024-10-13 02:26:12.294296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.871 [2024-10-13 02:26:12.363524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.871 [2024-10-13 02:26:12.441710] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.871 [2024-10-13 02:26:12.441757] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.442 BaseBdev1_malloc 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.442 true 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.442 02:26:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.442 [2024-10-13 02:26:13.001665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:54.442 [2024-10-13 02:26:13.001728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.442 [2024-10-13 02:26:13.001752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:54.442 [2024-10-13 02:26:13.001762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.442 [2024-10-13 02:26:13.004226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.442 [2024-10-13 02:26:13.004264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:54.442 BaseBdev1 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.442 BaseBdev2_malloc 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.442 true 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.442 [2024-10-13 02:26:13.063230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:54.442 [2024-10-13 02:26:13.063290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.442 [2024-10-13 02:26:13.063311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:54.442 [2024-10-13 02:26:13.063320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.442 [2024-10-13 02:26:13.065708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.442 [2024-10-13 02:26:13.065795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:54.442 BaseBdev2 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.442 BaseBdev3_malloc 00:11:54.442 02:26:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.442 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.443 true 00:11:54.443 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.443 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:54.443 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.443 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.443 [2024-10-13 02:26:13.109808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:54.443 [2024-10-13 02:26:13.109854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.443 [2024-10-13 02:26:13.109900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:54.443 [2024-10-13 02:26:13.109910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.443 [2024-10-13 02:26:13.112296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.443 [2024-10-13 02:26:13.112364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:54.443 BaseBdev3 00:11:54.443 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.443 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.443 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:54.443 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.443 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.703 BaseBdev4_malloc 00:11:54.703 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.703 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:54.703 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.703 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.703 true 00:11:54.703 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.703 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:54.703 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.703 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.703 [2024-10-13 02:26:13.156523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:54.704 [2024-10-13 02:26:13.156570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.704 [2024-10-13 02:26:13.156607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:54.704 [2024-10-13 02:26:13.156616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.704 [2024-10-13 02:26:13.159225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.704 [2024-10-13 02:26:13.159260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:54.704 BaseBdev4 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.704 [2024-10-13 02:26:13.168562] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.704 [2024-10-13 02:26:13.170683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.704 [2024-10-13 02:26:13.170756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.704 [2024-10-13 02:26:13.170857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:54.704 [2024-10-13 02:26:13.171082] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:54.704 [2024-10-13 02:26:13.171094] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.704 [2024-10-13 02:26:13.171348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:54.704 [2024-10-13 02:26:13.171511] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:54.704 [2024-10-13 02:26:13.171525] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:54.704 [2024-10-13 02:26:13.171653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:54.704 02:26:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.704 "name": "raid_bdev1", 00:11:54.704 "uuid": "1da11e41-d751-40ee-a7b9-4e250fd4ef6c", 00:11:54.704 "strip_size_kb": 0, 00:11:54.704 "state": "online", 00:11:54.704 "raid_level": "raid1", 00:11:54.704 "superblock": true, 00:11:54.704 "num_base_bdevs": 4, 00:11:54.704 "num_base_bdevs_discovered": 4, 00:11:54.704 "num_base_bdevs_operational": 4, 00:11:54.704 "base_bdevs_list": [ 00:11:54.704 { 
00:11:54.704 "name": "BaseBdev1", 00:11:54.704 "uuid": "caa1d571-c0eb-5bd4-88b1-29eecfda4406", 00:11:54.704 "is_configured": true, 00:11:54.704 "data_offset": 2048, 00:11:54.704 "data_size": 63488 00:11:54.704 }, 00:11:54.704 { 00:11:54.704 "name": "BaseBdev2", 00:11:54.704 "uuid": "b3ad9039-d898-5997-88d5-4eb1d619ec98", 00:11:54.704 "is_configured": true, 00:11:54.704 "data_offset": 2048, 00:11:54.704 "data_size": 63488 00:11:54.704 }, 00:11:54.704 { 00:11:54.704 "name": "BaseBdev3", 00:11:54.704 "uuid": "c38a6945-87ca-5667-a3b8-545baa5a254b", 00:11:54.704 "is_configured": true, 00:11:54.704 "data_offset": 2048, 00:11:54.704 "data_size": 63488 00:11:54.704 }, 00:11:54.704 { 00:11:54.704 "name": "BaseBdev4", 00:11:54.704 "uuid": "b11d659f-7e3e-5c78-b373-0a75751108c3", 00:11:54.704 "is_configured": true, 00:11:54.704 "data_offset": 2048, 00:11:54.704 "data_size": 63488 00:11:54.704 } 00:11:54.704 ] 00:11:54.704 }' 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.704 02:26:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.275 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:55.275 02:26:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:55.275 [2024-10-13 02:26:13.748166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.218 02:26:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.218 02:26:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.218 "name": "raid_bdev1", 00:11:56.218 "uuid": "1da11e41-d751-40ee-a7b9-4e250fd4ef6c", 00:11:56.218 "strip_size_kb": 0, 00:11:56.218 "state": "online", 00:11:56.218 "raid_level": "raid1", 00:11:56.218 "superblock": true, 00:11:56.218 "num_base_bdevs": 4, 00:11:56.218 "num_base_bdevs_discovered": 4, 00:11:56.218 "num_base_bdevs_operational": 4, 00:11:56.218 "base_bdevs_list": [ 00:11:56.218 { 00:11:56.218 "name": "BaseBdev1", 00:11:56.218 "uuid": "caa1d571-c0eb-5bd4-88b1-29eecfda4406", 00:11:56.218 "is_configured": true, 00:11:56.218 "data_offset": 2048, 00:11:56.218 "data_size": 63488 00:11:56.218 }, 00:11:56.218 { 00:11:56.218 "name": "BaseBdev2", 00:11:56.218 "uuid": "b3ad9039-d898-5997-88d5-4eb1d619ec98", 00:11:56.218 "is_configured": true, 00:11:56.218 "data_offset": 2048, 00:11:56.218 "data_size": 63488 00:11:56.218 }, 00:11:56.218 { 00:11:56.218 "name": "BaseBdev3", 00:11:56.218 "uuid": "c38a6945-87ca-5667-a3b8-545baa5a254b", 00:11:56.218 "is_configured": true, 00:11:56.218 "data_offset": 2048, 00:11:56.218 "data_size": 63488 00:11:56.218 }, 00:11:56.218 { 00:11:56.218 "name": "BaseBdev4", 00:11:56.218 "uuid": "b11d659f-7e3e-5c78-b373-0a75751108c3", 00:11:56.218 "is_configured": true, 00:11:56.218 "data_offset": 2048, 00:11:56.218 "data_size": 63488 00:11:56.218 } 00:11:56.218 ] 00:11:56.218 }' 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.218 02:26:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.482 [2024-10-13 02:26:15.137772] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:56.482 [2024-10-13 02:26:15.137876] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:56.482 [2024-10-13 02:26:15.140610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.482 [2024-10-13 02:26:15.140702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.482 [2024-10-13 02:26:15.140895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.482 [2024-10-13 02:26:15.140947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.482 { 00:11:56.482 "results": [ 00:11:56.482 { 00:11:56.482 "job": "raid_bdev1", 00:11:56.482 "core_mask": "0x1", 00:11:56.482 "workload": "randrw", 00:11:56.482 "percentage": 50, 00:11:56.482 "status": "finished", 00:11:56.482 "queue_depth": 1, 00:11:56.482 "io_size": 131072, 00:11:56.482 "runtime": 1.390283, 00:11:56.482 "iops": 8493.23483060643, 00:11:56.482 "mibps": 1061.6543538258038, 00:11:56.482 "io_failed": 0, 00:11:56.482 "io_timeout": 0, 00:11:56.482 "avg_latency_us": 115.1568336469391, 00:11:56.482 "min_latency_us": 22.134497816593885, 00:11:56.482 "max_latency_us": 1531.0812227074236 00:11:56.482 } 00:11:56.482 ], 00:11:56.482 "core_count": 1 00:11:56.482 } 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85682 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85682 ']' 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85682 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.482 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85682 00:11:56.748 killing process with pid 85682 00:11:56.748 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:56.748 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:56.748 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85682' 00:11:56.748 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85682 00:11:56.748 [2024-10-13 02:26:15.186750] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.748 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85682 00:11:56.748 [2024-10-13 02:26:15.255124] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hok0gIQJnl 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:57.009 ************************************ 00:11:57.009 END TEST raid_read_error_test 00:11:57.009 ************************************ 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:57.009 00:11:57.009 real 0m3.597s 00:11:57.009 user 0m4.370s 00:11:57.009 sys 0m0.697s 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.009 02:26:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.270 02:26:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:57.270 02:26:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:57.270 02:26:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.270 02:26:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.270 ************************************ 00:11:57.270 START TEST raid_write_error_test 00:11:57.270 ************************************ 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.51J4L0Vsyo 00:11:57.270 02:26:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85811 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85811 00:11:57.270 02:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85811 ']' 00:11:57.271 02:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.271 02:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.271 02:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.271 02:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.271 02:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.271 [2024-10-13 02:26:15.822058] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:57.271 [2024-10-13 02:26:15.822195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85811 ] 00:11:57.531 [2024-10-13 02:26:15.953971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.531 [2024-10-13 02:26:16.026265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.531 [2024-10-13 02:26:16.105818] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.531 [2024-10-13 02:26:16.105863] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.103 BaseBdev1_malloc 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.103 true 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.103 [2024-10-13 02:26:16.694044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:58.103 [2024-10-13 02:26:16.694150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.103 [2024-10-13 02:26:16.694193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:11:58.103 [2024-10-13 02:26:16.694204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.103 [2024-10-13 02:26:16.696619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.103 [2024-10-13 02:26:16.696655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:58.103 BaseBdev1 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.103 BaseBdev2_malloc 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:58.103 02:26:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.103 true 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.103 [2024-10-13 02:26:16.759032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:58.103 [2024-10-13 02:26:16.759115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.103 [2024-10-13 02:26:16.759147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:11:58.103 [2024-10-13 02:26:16.759160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.103 [2024-10-13 02:26:16.762543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.103 [2024-10-13 02:26:16.762648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:58.103 BaseBdev2 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.103 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:58.364 BaseBdev3_malloc 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 true 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 [2024-10-13 02:26:16.807275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:58.364 [2024-10-13 02:26:16.807323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.364 [2024-10-13 02:26:16.807344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:58.364 [2024-10-13 02:26:16.807353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.364 [2024-10-13 02:26:16.809742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.364 [2024-10-13 02:26:16.809776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:58.364 BaseBdev3 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 BaseBdev4_malloc 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 true 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.364 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.364 [2024-10-13 02:26:16.854650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:58.364 [2024-10-13 02:26:16.854756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.364 [2024-10-13 02:26:16.854793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:58.364 [2024-10-13 02:26:16.854803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.364 [2024-10-13 02:26:16.857213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.364 [2024-10-13 02:26:16.857248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:58.365 BaseBdev4 
00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.365 [2024-10-13 02:26:16.866698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.365 [2024-10-13 02:26:16.868910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.365 [2024-10-13 02:26:16.869049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.365 [2024-10-13 02:26:16.869120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.365 [2024-10-13 02:26:16.869330] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:11:58.365 [2024-10-13 02:26:16.869342] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.365 [2024-10-13 02:26:16.869595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:11:58.365 [2024-10-13 02:26:16.869764] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:11:58.365 [2024-10-13 02:26:16.869783] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:11:58.365 [2024-10-13 02:26:16.869929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.365 "name": "raid_bdev1", 00:11:58.365 "uuid": "380351e1-579a-4222-969d-75cef2958ab7", 00:11:58.365 "strip_size_kb": 0, 00:11:58.365 "state": "online", 00:11:58.365 "raid_level": "raid1", 00:11:58.365 "superblock": true, 00:11:58.365 "num_base_bdevs": 4, 00:11:58.365 "num_base_bdevs_discovered": 4, 00:11:58.365 
"num_base_bdevs_operational": 4, 00:11:58.365 "base_bdevs_list": [ 00:11:58.365 { 00:11:58.365 "name": "BaseBdev1", 00:11:58.365 "uuid": "4d368c22-fa90-53d0-82f2-b1296a9ad601", 00:11:58.365 "is_configured": true, 00:11:58.365 "data_offset": 2048, 00:11:58.365 "data_size": 63488 00:11:58.365 }, 00:11:58.365 { 00:11:58.365 "name": "BaseBdev2", 00:11:58.365 "uuid": "2866b547-e31a-58f2-b8ea-b8947448bf7f", 00:11:58.365 "is_configured": true, 00:11:58.365 "data_offset": 2048, 00:11:58.365 "data_size": 63488 00:11:58.365 }, 00:11:58.365 { 00:11:58.365 "name": "BaseBdev3", 00:11:58.365 "uuid": "96addc97-7f25-53e6-a479-e380c43defe9", 00:11:58.365 "is_configured": true, 00:11:58.365 "data_offset": 2048, 00:11:58.365 "data_size": 63488 00:11:58.365 }, 00:11:58.365 { 00:11:58.365 "name": "BaseBdev4", 00:11:58.365 "uuid": "58c82580-7a27-50d8-90b6-e3cf7eb50c7f", 00:11:58.365 "is_configured": true, 00:11:58.365 "data_offset": 2048, 00:11:58.365 "data_size": 63488 00:11:58.365 } 00:11:58.365 ] 00:11:58.365 }' 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.365 02:26:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.626 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.626 02:26:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.887 [2024-10-13 02:26:17.402333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.829 [2024-10-13 02:26:18.313837] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:59.829 [2024-10-13 02:26:18.314008] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.829 [2024-10-13 02:26:18.314297] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.829 "name": "raid_bdev1", 00:11:59.829 "uuid": "380351e1-579a-4222-969d-75cef2958ab7", 00:11:59.829 "strip_size_kb": 0, 00:11:59.829 "state": "online", 00:11:59.829 "raid_level": "raid1", 00:11:59.829 "superblock": true, 00:11:59.829 "num_base_bdevs": 4, 00:11:59.829 "num_base_bdevs_discovered": 3, 00:11:59.829 "num_base_bdevs_operational": 3, 00:11:59.829 "base_bdevs_list": [ 00:11:59.829 { 00:11:59.829 "name": null, 00:11:59.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.829 "is_configured": false, 00:11:59.829 "data_offset": 0, 00:11:59.829 "data_size": 63488 00:11:59.829 }, 00:11:59.829 { 00:11:59.829 "name": "BaseBdev2", 00:11:59.829 "uuid": "2866b547-e31a-58f2-b8ea-b8947448bf7f", 00:11:59.829 "is_configured": true, 00:11:59.829 "data_offset": 2048, 00:11:59.829 "data_size": 63488 00:11:59.829 }, 00:11:59.829 { 00:11:59.829 "name": "BaseBdev3", 00:11:59.829 "uuid": "96addc97-7f25-53e6-a479-e380c43defe9", 00:11:59.829 "is_configured": true, 00:11:59.829 "data_offset": 2048, 00:11:59.829 "data_size": 63488 00:11:59.829 }, 00:11:59.829 { 00:11:59.829 "name": "BaseBdev4", 00:11:59.829 "uuid": "58c82580-7a27-50d8-90b6-e3cf7eb50c7f", 00:11:59.829 "is_configured": true, 00:11:59.829 "data_offset": 2048, 00:11:59.829 "data_size": 63488 00:11:59.829 } 00:11:59.829 ] 
00:11:59.829 }' 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.829 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.400 [2024-10-13 02:26:18.798978] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.400 [2024-10-13 02:26:18.799017] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.400 [2024-10-13 02:26:18.801587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.400 [2024-10-13 02:26:18.801648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.400 [2024-10-13 02:26:18.801752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.400 [2024-10-13 02:26:18.801764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:12:00.400 { 00:12:00.400 "results": [ 00:12:00.400 { 00:12:00.400 "job": "raid_bdev1", 00:12:00.400 "core_mask": "0x1", 00:12:00.400 "workload": "randrw", 00:12:00.400 "percentage": 50, 00:12:00.400 "status": "finished", 00:12:00.400 "queue_depth": 1, 00:12:00.400 "io_size": 131072, 00:12:00.400 "runtime": 1.396912, 00:12:00.400 "iops": 9271.879688913832, 00:12:00.400 "mibps": 1158.984961114229, 00:12:00.400 "io_failed": 0, 00:12:00.400 "io_timeout": 0, 00:12:00.400 "avg_latency_us": 105.26316152889675, 00:12:00.400 "min_latency_us": 22.134497816593885, 00:12:00.400 "max_latency_us": 1545.3903930131005 00:12:00.400 } 00:12:00.400 ], 00:12:00.400 "core_count": 1 
00:12:00.400 } 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85811 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85811 ']' 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85811 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85811 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:00.400 killing process with pid 85811 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85811' 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85811 00:12:00.400 [2024-10-13 02:26:18.847491] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.400 02:26:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85811 00:12:00.400 [2024-10-13 02:26:18.914251] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.51J4L0Vsyo 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:00.661 ************************************ 00:12:00.661 END TEST raid_write_error_test 00:12:00.661 ************************************ 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:00.661 00:12:00.661 real 0m3.586s 00:12:00.661 user 0m4.357s 00:12:00.661 sys 0m0.692s 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.661 02:26:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.921 02:26:19 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:00.921 02:26:19 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:00.921 02:26:19 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:00.921 02:26:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:00.921 02:26:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.921 02:26:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.921 ************************************ 00:12:00.921 START TEST raid_rebuild_test 00:12:00.921 ************************************ 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:00.921 
02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:00.921 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85944 00:12:00.922 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:00.922 02:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85944 00:12:00.922 02:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85944 ']' 00:12:00.922 02:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.922 02:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.922 02:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.922 02:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.922 02:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.922 [2024-10-13 02:26:19.470280] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:00.922 [2024-10-13 02:26:19.470536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85944 ] 00:12:00.922 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:00.922 Zero copy mechanism will not be used. 
00:12:01.182 [2024-10-13 02:26:19.616181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.182 [2024-10-13 02:26:19.687681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.182 [2024-10-13 02:26:19.763994] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.182 [2024-10-13 02:26:19.764137] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.753 BaseBdev1_malloc 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.753 [2024-10-13 02:26:20.330620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:01.753 [2024-10-13 02:26:20.330682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.753 [2024-10-13 02:26:20.330709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:01.753 [2024-10-13 02:26:20.330726] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.753 [2024-10-13 02:26:20.333164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.753 [2024-10-13 02:26:20.333197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:01.753 BaseBdev1 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.753 BaseBdev2_malloc 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.753 [2024-10-13 02:26:20.376166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:01.753 [2024-10-13 02:26:20.376216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.753 [2024-10-13 02:26:20.376237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:01.753 [2024-10-13 02:26:20.376247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.753 [2024-10-13 02:26:20.378625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.753 [2024-10-13 02:26:20.378712] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:01.753 BaseBdev2 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.753 spare_malloc 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.753 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.754 spare_delay 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.754 [2024-10-13 02:26:20.422630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:01.754 [2024-10-13 02:26:20.422683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.754 [2024-10-13 02:26:20.422706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:01.754 [2024-10-13 02:26:20.422715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.754 [2024-10-13 
02:26:20.425096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.754 [2024-10-13 02:26:20.425167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:01.754 spare 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.754 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.014 [2024-10-13 02:26:20.434669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.014 [2024-10-13 02:26:20.436785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.014 [2024-10-13 02:26:20.436937] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:02.014 [2024-10-13 02:26:20.436954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:02.014 [2024-10-13 02:26:20.437252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:02.014 [2024-10-13 02:26:20.437382] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:02.014 [2024-10-13 02:26:20.437396] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:02.014 [2024-10-13 02:26:20.437516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:02.014 02:26:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.014 "name": "raid_bdev1", 00:12:02.014 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:02.014 "strip_size_kb": 0, 00:12:02.014 "state": "online", 00:12:02.014 "raid_level": "raid1", 00:12:02.014 "superblock": false, 00:12:02.014 "num_base_bdevs": 2, 00:12:02.014 "num_base_bdevs_discovered": 2, 00:12:02.014 "num_base_bdevs_operational": 2, 00:12:02.014 "base_bdevs_list": [ 00:12:02.014 { 00:12:02.014 "name": "BaseBdev1", 
00:12:02.014 "uuid": "d88b1f3b-7db4-5fe9-872f-31481c472de4", 00:12:02.014 "is_configured": true, 00:12:02.014 "data_offset": 0, 00:12:02.014 "data_size": 65536 00:12:02.014 }, 00:12:02.014 { 00:12:02.014 "name": "BaseBdev2", 00:12:02.014 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:02.014 "is_configured": true, 00:12:02.014 "data_offset": 0, 00:12:02.014 "data_size": 65536 00:12:02.014 } 00:12:02.014 ] 00:12:02.014 }' 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.014 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.275 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:02.275 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.275 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.275 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:02.275 [2024-10-13 02:26:20.934161] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.275 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.535 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:02.535 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.535 02:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:02.535 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.535 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.535 02:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:02.535 
02:26:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.535 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:02.535 [2024-10-13 02:26:21.201475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:02.795 /dev/nbd0 00:12:02.795 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:02.795 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:02.795 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:02.795 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:02.795 02:26:21 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:02.796 1+0 records in 00:12:02.796 1+0 records out 00:12:02.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361876 s, 11.3 MB/s 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:02.796 02:26:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:06.999 65536+0 records in 00:12:06.999 65536+0 records out 00:12:06.999 33554432 bytes (34 MB, 32 MiB) copied, 4.3527 s, 7.7 MB/s 00:12:06.999 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:06.999 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.999 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:06.999 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.999 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:06.999 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.999 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:07.260 [2024-10-13 02:26:25.819137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 [2024-10-13 02:26:25.855151] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.260 02:26:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.260 "name": "raid_bdev1", 00:12:07.260 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:07.260 "strip_size_kb": 0, 00:12:07.260 "state": "online", 00:12:07.260 "raid_level": "raid1", 00:12:07.260 "superblock": false, 00:12:07.260 "num_base_bdevs": 2, 00:12:07.260 "num_base_bdevs_discovered": 1, 00:12:07.260 "num_base_bdevs_operational": 1, 00:12:07.260 "base_bdevs_list": [ 00:12:07.260 { 00:12:07.260 "name": null, 00:12:07.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.260 "is_configured": false, 00:12:07.260 "data_offset": 0, 00:12:07.260 "data_size": 65536 00:12:07.260 }, 00:12:07.260 { 00:12:07.260 "name": "BaseBdev2", 00:12:07.260 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:07.260 "is_configured": true, 00:12:07.260 "data_offset": 0, 00:12:07.260 "data_size": 65536 00:12:07.260 } 00:12:07.260 ] 00:12:07.260 }' 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.260 02:26:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 02:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:07.830 02:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.830 02:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 [2024-10-13 02:26:26.294485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.830 [2024-10-13 02:26:26.298849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:12:07.830 02:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.830 02:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:07.830 [2024-10-13 02:26:26.300644] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.768 "name": "raid_bdev1", 00:12:08.768 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:08.768 "strip_size_kb": 0, 00:12:08.768 "state": "online", 00:12:08.768 "raid_level": "raid1", 00:12:08.768 "superblock": false, 00:12:08.768 "num_base_bdevs": 2, 00:12:08.768 "num_base_bdevs_discovered": 2, 00:12:08.768 "num_base_bdevs_operational": 2, 00:12:08.768 "process": { 00:12:08.768 "type": "rebuild", 00:12:08.768 "target": "spare", 00:12:08.768 "progress": { 00:12:08.768 "blocks": 20480, 00:12:08.768 "percent": 31 00:12:08.768 } 00:12:08.768 }, 00:12:08.768 "base_bdevs_list": [ 00:12:08.768 { 00:12:08.768 "name": "spare", 00:12:08.768 "uuid": "b1451908-6d3d-51d0-98a9-56dbe36d66c9", 00:12:08.768 "is_configured": true, 00:12:08.768 "data_offset": 0, 00:12:08.768 
"data_size": 65536 00:12:08.768 }, 00:12:08.768 { 00:12:08.768 "name": "BaseBdev2", 00:12:08.768 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:08.768 "is_configured": true, 00:12:08.768 "data_offset": 0, 00:12:08.768 "data_size": 65536 00:12:08.768 } 00:12:08.768 ] 00:12:08.768 }' 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.768 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.768 [2024-10-13 02:26:27.433824] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.027 [2024-10-13 02:26:27.506283] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:09.027 [2024-10-13 02:26:27.506460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.027 [2024-10-13 02:26:27.506524] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.027 [2024-10-13 02:26:27.506548] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:09.027 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.028 "name": "raid_bdev1", 00:12:09.028 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:09.028 "strip_size_kb": 0, 00:12:09.028 "state": "online", 00:12:09.028 "raid_level": "raid1", 00:12:09.028 "superblock": false, 00:12:09.028 "num_base_bdevs": 2, 00:12:09.028 "num_base_bdevs_discovered": 1, 00:12:09.028 "num_base_bdevs_operational": 1, 00:12:09.028 "base_bdevs_list": [ 00:12:09.028 { 00:12:09.028 "name": null, 00:12:09.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.028 
"is_configured": false, 00:12:09.028 "data_offset": 0, 00:12:09.028 "data_size": 65536 00:12:09.028 }, 00:12:09.028 { 00:12:09.028 "name": "BaseBdev2", 00:12:09.028 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:09.028 "is_configured": true, 00:12:09.028 "data_offset": 0, 00:12:09.028 "data_size": 65536 00:12:09.028 } 00:12:09.028 ] 00:12:09.028 }' 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.028 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.286 02:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.546 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.546 "name": "raid_bdev1", 00:12:09.546 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:09.546 "strip_size_kb": 0, 00:12:09.546 "state": "online", 00:12:09.546 "raid_level": "raid1", 00:12:09.546 "superblock": false, 00:12:09.546 "num_base_bdevs": 2, 00:12:09.546 
"num_base_bdevs_discovered": 1, 00:12:09.546 "num_base_bdevs_operational": 1, 00:12:09.546 "base_bdevs_list": [ 00:12:09.546 { 00:12:09.546 "name": null, 00:12:09.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.546 "is_configured": false, 00:12:09.546 "data_offset": 0, 00:12:09.546 "data_size": 65536 00:12:09.546 }, 00:12:09.546 { 00:12:09.546 "name": "BaseBdev2", 00:12:09.546 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:09.546 "is_configured": true, 00:12:09.546 "data_offset": 0, 00:12:09.546 "data_size": 65536 00:12:09.546 } 00:12:09.546 ] 00:12:09.546 }' 00:12:09.546 02:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.546 02:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:09.546 02:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.546 02:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.546 02:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:09.546 02:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.546 02:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.546 [2024-10-13 02:26:28.078907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.546 [2024-10-13 02:26:28.083156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:12:09.546 02:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.546 02:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:09.546 [2024-10-13 02:26:28.084961] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.495 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.495 "name": "raid_bdev1", 00:12:10.495 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:10.495 "strip_size_kb": 0, 00:12:10.495 "state": "online", 00:12:10.495 "raid_level": "raid1", 00:12:10.495 "superblock": false, 00:12:10.495 "num_base_bdevs": 2, 00:12:10.496 "num_base_bdevs_discovered": 2, 00:12:10.496 "num_base_bdevs_operational": 2, 00:12:10.496 "process": { 00:12:10.496 "type": "rebuild", 00:12:10.496 "target": "spare", 00:12:10.496 "progress": { 00:12:10.496 "blocks": 20480, 00:12:10.496 "percent": 31 00:12:10.496 } 00:12:10.496 }, 00:12:10.496 "base_bdevs_list": [ 00:12:10.496 { 00:12:10.496 "name": "spare", 00:12:10.496 "uuid": "b1451908-6d3d-51d0-98a9-56dbe36d66c9", 00:12:10.496 "is_configured": true, 00:12:10.496 "data_offset": 0, 00:12:10.496 "data_size": 65536 00:12:10.496 }, 00:12:10.496 { 00:12:10.496 "name": "BaseBdev2", 00:12:10.496 "uuid": 
"09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:10.496 "is_configured": true, 00:12:10.496 "data_offset": 0, 00:12:10.496 "data_size": 65536 00:12:10.496 } 00:12:10.496 ] 00:12:10.496 }' 00:12:10.496 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=297 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.769 "name": "raid_bdev1", 00:12:10.769 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:10.769 "strip_size_kb": 0, 00:12:10.769 "state": "online", 00:12:10.769 "raid_level": "raid1", 00:12:10.769 "superblock": false, 00:12:10.769 "num_base_bdevs": 2, 00:12:10.769 "num_base_bdevs_discovered": 2, 00:12:10.769 "num_base_bdevs_operational": 2, 00:12:10.769 "process": { 00:12:10.769 "type": "rebuild", 00:12:10.769 "target": "spare", 00:12:10.769 "progress": { 00:12:10.769 "blocks": 22528, 00:12:10.769 "percent": 34 00:12:10.769 } 00:12:10.769 }, 00:12:10.769 "base_bdevs_list": [ 00:12:10.769 { 00:12:10.769 "name": "spare", 00:12:10.769 "uuid": "b1451908-6d3d-51d0-98a9-56dbe36d66c9", 00:12:10.769 "is_configured": true, 00:12:10.769 "data_offset": 0, 00:12:10.769 "data_size": 65536 00:12:10.769 }, 00:12:10.769 { 00:12:10.769 "name": "BaseBdev2", 00:12:10.769 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:10.769 "is_configured": true, 00:12:10.769 "data_offset": 0, 00:12:10.769 "data_size": 65536 00:12:10.769 } 00:12:10.769 ] 00:12:10.769 }' 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.769 02:26:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.708 02:26:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.968 02:26:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.968 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.968 "name": "raid_bdev1", 00:12:11.968 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:11.968 "strip_size_kb": 0, 00:12:11.968 "state": "online", 00:12:11.968 "raid_level": "raid1", 00:12:11.968 "superblock": false, 00:12:11.968 "num_base_bdevs": 2, 00:12:11.968 "num_base_bdevs_discovered": 2, 00:12:11.968 "num_base_bdevs_operational": 2, 00:12:11.968 "process": { 00:12:11.968 "type": "rebuild", 00:12:11.968 "target": "spare", 00:12:11.968 "progress": { 00:12:11.968 "blocks": 45056, 00:12:11.968 "percent": 68 00:12:11.968 } 00:12:11.968 }, 00:12:11.968 "base_bdevs_list": [ 00:12:11.968 { 00:12:11.968 "name": "spare", 00:12:11.968 "uuid": 
"b1451908-6d3d-51d0-98a9-56dbe36d66c9", 00:12:11.968 "is_configured": true, 00:12:11.968 "data_offset": 0, 00:12:11.968 "data_size": 65536 00:12:11.968 }, 00:12:11.968 { 00:12:11.968 "name": "BaseBdev2", 00:12:11.968 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:11.968 "is_configured": true, 00:12:11.968 "data_offset": 0, 00:12:11.968 "data_size": 65536 00:12:11.968 } 00:12:11.968 ] 00:12:11.968 }' 00:12:11.968 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.968 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.968 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.968 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.968 02:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:12.906 [2024-10-13 02:26:31.297938] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:12.906 [2024-10-13 02:26:31.298060] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:12.906 [2024-10-13 02:26:31.298122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.906 02:26:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.906 "name": "raid_bdev1", 00:12:12.906 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:12.906 "strip_size_kb": 0, 00:12:12.906 "state": "online", 00:12:12.906 "raid_level": "raid1", 00:12:12.906 "superblock": false, 00:12:12.906 "num_base_bdevs": 2, 00:12:12.906 "num_base_bdevs_discovered": 2, 00:12:12.906 "num_base_bdevs_operational": 2, 00:12:12.906 "base_bdevs_list": [ 00:12:12.906 { 00:12:12.906 "name": "spare", 00:12:12.906 "uuid": "b1451908-6d3d-51d0-98a9-56dbe36d66c9", 00:12:12.906 "is_configured": true, 00:12:12.906 "data_offset": 0, 00:12:12.906 "data_size": 65536 00:12:12.906 }, 00:12:12.906 { 00:12:12.906 "name": "BaseBdev2", 00:12:12.906 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:12.906 "is_configured": true, 00:12:12.906 "data_offset": 0, 00:12:12.906 "data_size": 65536 00:12:12.906 } 00:12:12.906 ] 00:12:12.906 }' 00:12:12.906 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.166 "name": "raid_bdev1", 00:12:13.166 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:13.166 "strip_size_kb": 0, 00:12:13.166 "state": "online", 00:12:13.166 "raid_level": "raid1", 00:12:13.166 "superblock": false, 00:12:13.166 "num_base_bdevs": 2, 00:12:13.166 "num_base_bdevs_discovered": 2, 00:12:13.166 "num_base_bdevs_operational": 2, 00:12:13.166 "base_bdevs_list": [ 00:12:13.166 { 00:12:13.166 "name": "spare", 00:12:13.166 "uuid": "b1451908-6d3d-51d0-98a9-56dbe36d66c9", 00:12:13.166 "is_configured": true, 00:12:13.166 "data_offset": 0, 00:12:13.166 "data_size": 65536 00:12:13.166 }, 00:12:13.166 { 00:12:13.166 "name": "BaseBdev2", 00:12:13.166 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:13.166 "is_configured": true, 00:12:13.166 "data_offset": 0, 00:12:13.166 "data_size": 65536 
00:12:13.166 } 00:12:13.166 ] 00:12:13.166 }' 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.166 02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.166 
02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.426 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.426 "name": "raid_bdev1", 00:12:13.426 "uuid": "66597048-c923-40bd-9892-bb4bd1d133ee", 00:12:13.426 "strip_size_kb": 0, 00:12:13.426 "state": "online", 00:12:13.426 "raid_level": "raid1", 00:12:13.426 "superblock": false, 00:12:13.426 "num_base_bdevs": 2, 00:12:13.426 "num_base_bdevs_discovered": 2, 00:12:13.426 "num_base_bdevs_operational": 2, 00:12:13.426 "base_bdevs_list": [ 00:12:13.426 { 00:12:13.426 "name": "spare", 00:12:13.426 "uuid": "b1451908-6d3d-51d0-98a9-56dbe36d66c9", 00:12:13.426 "is_configured": true, 00:12:13.426 "data_offset": 0, 00:12:13.426 "data_size": 65536 00:12:13.426 }, 00:12:13.426 { 00:12:13.426 "name": "BaseBdev2", 00:12:13.426 "uuid": "09f8a8ba-b346-5cbd-9530-fabcb1054704", 00:12:13.426 "is_configured": true, 00:12:13.426 "data_offset": 0, 00:12:13.426 "data_size": 65536 00:12:13.426 } 00:12:13.426 ] 00:12:13.426 }' 00:12:13.426 02:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.426 02:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.687 [2024-10-13 02:26:32.252949] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.687 [2024-10-13 02:26:32.253087] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.687 [2024-10-13 02:26:32.253205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.687 [2024-10-13 02:26:32.253305] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.687 [2024-10-13 02:26:32.253355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:13.687 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:13.947 /dev/nbd0 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.947 1+0 records in 00:12:13.947 1+0 records out 00:12:13.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407067 s, 10.1 MB/s 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:13.947 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:14.208 /dev/nbd1 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.208 1+0 records in 00:12:14.208 1+0 records out 00:12:14.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417574 s, 9.8 MB/s 00:12:14.208 02:26:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:14.208 02:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:14.467 02:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:14.467 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.467 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:14.467 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:14.467 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:14.467 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.467 02:26:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:14.467 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:14.467 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:14.467 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:14.467 
02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.467 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.467 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:14.467 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:14.467 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.467 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.467 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:14.727 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85944 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 85944 ']' 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85944 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85944 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85944' 00:12:14.728 killing process with pid 85944 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 85944 00:12:14.728 Received shutdown signal, test time was about 60.000000 seconds 00:12:14.728 00:12:14.728 Latency(us) 00:12:14.728 [2024-10-13T02:26:33.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.728 [2024-10-13T02:26:33.412Z] =================================================================================================================== 00:12:14.728 [2024-10-13T02:26:33.412Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:14.728 [2024-10-13 02:26:33.389462] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.728 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 85944 00:12:14.987 [2024-10-13 02:26:33.419098] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.987 02:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:14.987 00:12:14.987 real 0m14.281s 00:12:14.987 user 0m15.879s 00:12:14.987 sys 0m3.197s 00:12:14.987 02:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.987 ************************************ 00:12:14.987 END TEST raid_rebuild_test 00:12:14.987 ************************************ 00:12:14.987 02:26:33 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.247 02:26:33 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:15.247 02:26:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:15.247 02:26:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.247 02:26:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:15.247 ************************************ 00:12:15.247 START TEST raid_rebuild_test_sb 00:12:15.247 ************************************ 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:15.247 02:26:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86353 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86353 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86353 ']' 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.247 02:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.247 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.248 02:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.248 02:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.248 02:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.248 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:15.248 Zero copy mechanism will not be used. 00:12:15.248 [2024-10-13 02:26:33.826560] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:15.248 [2024-10-13 02:26:33.826689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86353 ] 00:12:15.507 [2024-10-13 02:26:33.971244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.507 [2024-10-13 02:26:34.017277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.507 [2024-10-13 02:26:34.059743] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.507 [2024-10-13 02:26:34.059782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.077 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.077 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:16.077 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:16.077 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:16.077 02:26:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.077 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.077 BaseBdev1_malloc 00:12:16.077 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.077 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:16.077 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.078 [2024-10-13 02:26:34.670387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:16.078 [2024-10-13 02:26:34.670455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.078 [2024-10-13 02:26:34.670489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:16.078 [2024-10-13 02:26:34.670508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.078 [2024-10-13 02:26:34.672576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.078 [2024-10-13 02:26:34.672612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.078 BaseBdev1 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.078 BaseBdev2_malloc 00:12:16.078 
02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.078 [2024-10-13 02:26:34.716144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:16.078 [2024-10-13 02:26:34.716245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.078 [2024-10-13 02:26:34.716291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:16.078 [2024-10-13 02:26:34.716313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.078 [2024-10-13 02:26:34.721160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.078 [2024-10-13 02:26:34.721359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:16.078 BaseBdev2 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.078 spare_malloc 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.078 spare_delay 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.078 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.338 [2024-10-13 02:26:34.759364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:16.338 [2024-10-13 02:26:34.759418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.338 [2024-10-13 02:26:34.759440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:16.338 [2024-10-13 02:26:34.759449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.338 [2024-10-13 02:26:34.761494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.338 [2024-10-13 02:26:34.761530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:16.338 spare 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.338 [2024-10-13 02:26:34.771408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.338 [2024-10-13 
02:26:34.773250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.338 [2024-10-13 02:26:34.773400] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:16.338 [2024-10-13 02:26:34.773412] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.338 [2024-10-13 02:26:34.773653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:16.338 [2024-10-13 02:26:34.773790] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:16.338 [2024-10-13 02:26:34.773802] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:16.338 [2024-10-13 02:26:34.773934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.338 "name": "raid_bdev1", 00:12:16.338 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:16.338 "strip_size_kb": 0, 00:12:16.338 "state": "online", 00:12:16.338 "raid_level": "raid1", 00:12:16.338 "superblock": true, 00:12:16.338 "num_base_bdevs": 2, 00:12:16.338 "num_base_bdevs_discovered": 2, 00:12:16.338 "num_base_bdevs_operational": 2, 00:12:16.338 "base_bdevs_list": [ 00:12:16.338 { 00:12:16.338 "name": "BaseBdev1", 00:12:16.338 "uuid": "e61ade8b-c35e-5178-b17b-0ed7be86d96d", 00:12:16.338 "is_configured": true, 00:12:16.338 "data_offset": 2048, 00:12:16.338 "data_size": 63488 00:12:16.338 }, 00:12:16.338 { 00:12:16.338 "name": "BaseBdev2", 00:12:16.338 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:16.338 "is_configured": true, 00:12:16.338 "data_offset": 2048, 00:12:16.338 "data_size": 63488 00:12:16.338 } 00:12:16.338 ] 00:12:16.338 }' 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.338 02:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.598 [2024-10-13 02:26:35.238928] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.598 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:16.859 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:16.859 [2024-10-13 02:26:35.514216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:16.859 /dev/nbd0 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.119 1+0 records in 00:12:17.119 1+0 records out 00:12:17.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341219 s, 12.0 MB/s 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:17.119 02:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:21.317 63488+0 records in 00:12:21.317 63488+0 records out 00:12:21.317 32505856 bytes (33 MB, 31 MiB) copied, 4.13259 s, 7.9 MB/s 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@51 -- # local i 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:21.317 [2024-10-13 02:26:39.915007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.317 [2024-10-13 02:26:39.951002] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.317 02:26:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.577 02:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.577 "name": "raid_bdev1", 00:12:21.577 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:21.577 "strip_size_kb": 0, 00:12:21.577 "state": "online", 00:12:21.577 "raid_level": "raid1", 00:12:21.577 "superblock": true, 00:12:21.577 "num_base_bdevs": 2, 00:12:21.577 "num_base_bdevs_discovered": 1, 00:12:21.577 "num_base_bdevs_operational": 1, 00:12:21.577 "base_bdevs_list": [ 00:12:21.577 { 00:12:21.577 "name": null, 00:12:21.577 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:21.577 "is_configured": false, 00:12:21.577 "data_offset": 0, 00:12:21.577 "data_size": 63488 00:12:21.577 }, 00:12:21.577 { 00:12:21.577 "name": "BaseBdev2", 00:12:21.577 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:21.577 "is_configured": true, 00:12:21.577 "data_offset": 2048, 00:12:21.577 "data_size": 63488 00:12:21.577 } 00:12:21.577 ] 00:12:21.577 }' 00:12:21.577 02:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.577 02:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.837 02:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:21.837 02:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.837 02:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.837 [2024-10-13 02:26:40.406332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:21.837 [2024-10-13 02:26:40.410536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:12:21.837 02:26:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.837 02:26:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:21.837 [2024-10-13 02:26:40.412622] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.787 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.787 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.787 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.787 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.787 
02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.787 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.787 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.787 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.787 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.787 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.052 "name": "raid_bdev1", 00:12:23.052 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:23.052 "strip_size_kb": 0, 00:12:23.052 "state": "online", 00:12:23.052 "raid_level": "raid1", 00:12:23.052 "superblock": true, 00:12:23.052 "num_base_bdevs": 2, 00:12:23.052 "num_base_bdevs_discovered": 2, 00:12:23.052 "num_base_bdevs_operational": 2, 00:12:23.052 "process": { 00:12:23.052 "type": "rebuild", 00:12:23.052 "target": "spare", 00:12:23.052 "progress": { 00:12:23.052 "blocks": 20480, 00:12:23.052 "percent": 32 00:12:23.052 } 00:12:23.052 }, 00:12:23.052 "base_bdevs_list": [ 00:12:23.052 { 00:12:23.052 "name": "spare", 00:12:23.052 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:23.052 "is_configured": true, 00:12:23.052 "data_offset": 2048, 00:12:23.052 "data_size": 63488 00:12:23.052 }, 00:12:23.052 { 00:12:23.052 "name": "BaseBdev2", 00:12:23.052 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:23.052 "is_configured": true, 00:12:23.052 "data_offset": 2048, 00:12:23.052 "data_size": 63488 00:12:23.052 } 00:12:23.052 ] 00:12:23.052 }' 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.052 [2024-10-13 02:26:41.557454] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.052 [2024-10-13 02:26:41.617348] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:23.052 [2024-10-13 02:26:41.617423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.052 [2024-10-13 02:26:41.617441] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.052 [2024-10-13 02:26:41.617448] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.052 02:26:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.052 "name": "raid_bdev1", 00:12:23.052 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:23.052 "strip_size_kb": 0, 00:12:23.052 "state": "online", 00:12:23.052 "raid_level": "raid1", 00:12:23.052 "superblock": true, 00:12:23.052 "num_base_bdevs": 2, 00:12:23.052 "num_base_bdevs_discovered": 1, 00:12:23.052 "num_base_bdevs_operational": 1, 00:12:23.052 "base_bdevs_list": [ 00:12:23.052 { 00:12:23.052 "name": null, 00:12:23.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.052 "is_configured": false, 00:12:23.052 "data_offset": 0, 00:12:23.052 "data_size": 63488 00:12:23.052 }, 00:12:23.052 { 00:12:23.052 "name": "BaseBdev2", 00:12:23.052 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:23.052 "is_configured": true, 00:12:23.052 "data_offset": 2048, 00:12:23.052 "data_size": 63488 00:12:23.052 } 00:12:23.052 ] 00:12:23.052 }' 00:12:23.053 02:26:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.053 02:26:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.622 "name": "raid_bdev1", 00:12:23.622 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:23.622 "strip_size_kb": 0, 00:12:23.622 "state": "online", 00:12:23.622 "raid_level": "raid1", 00:12:23.622 "superblock": true, 00:12:23.622 "num_base_bdevs": 2, 00:12:23.622 "num_base_bdevs_discovered": 1, 00:12:23.622 "num_base_bdevs_operational": 1, 00:12:23.622 "base_bdevs_list": [ 00:12:23.622 { 00:12:23.622 "name": null, 00:12:23.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.622 "is_configured": false, 00:12:23.622 "data_offset": 0, 00:12:23.622 "data_size": 63488 00:12:23.622 }, 00:12:23.622 
{ 00:12:23.622 "name": "BaseBdev2", 00:12:23.622 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:23.622 "is_configured": true, 00:12:23.622 "data_offset": 2048, 00:12:23.622 "data_size": 63488 00:12:23.622 } 00:12:23.622 ] 00:12:23.622 }' 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.622 [2024-10-13 02:26:42.208974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:23.622 [2024-10-13 02:26:42.213114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.622 02:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:23.622 [2024-10-13 02:26:42.215011] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:24.562 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.562 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.562 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.562 02:26:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.562 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.562 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.562 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.562 02:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.562 02:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.823 "name": "raid_bdev1", 00:12:24.823 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:24.823 "strip_size_kb": 0, 00:12:24.823 "state": "online", 00:12:24.823 "raid_level": "raid1", 00:12:24.823 "superblock": true, 00:12:24.823 "num_base_bdevs": 2, 00:12:24.823 "num_base_bdevs_discovered": 2, 00:12:24.823 "num_base_bdevs_operational": 2, 00:12:24.823 "process": { 00:12:24.823 "type": "rebuild", 00:12:24.823 "target": "spare", 00:12:24.823 "progress": { 00:12:24.823 "blocks": 20480, 00:12:24.823 "percent": 32 00:12:24.823 } 00:12:24.823 }, 00:12:24.823 "base_bdevs_list": [ 00:12:24.823 { 00:12:24.823 "name": "spare", 00:12:24.823 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:24.823 "is_configured": true, 00:12:24.823 "data_offset": 2048, 00:12:24.823 "data_size": 63488 00:12:24.823 }, 00:12:24.823 { 00:12:24.823 "name": "BaseBdev2", 00:12:24.823 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:24.823 "is_configured": true, 00:12:24.823 "data_offset": 2048, 00:12:24.823 "data_size": 63488 00:12:24.823 } 00:12:24.823 ] 00:12:24.823 }' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:24.823 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=311 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.823 "name": "raid_bdev1", 00:12:24.823 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:24.823 "strip_size_kb": 0, 00:12:24.823 "state": "online", 00:12:24.823 "raid_level": "raid1", 00:12:24.823 "superblock": true, 00:12:24.823 "num_base_bdevs": 2, 00:12:24.823 "num_base_bdevs_discovered": 2, 00:12:24.823 "num_base_bdevs_operational": 2, 00:12:24.823 "process": { 00:12:24.823 "type": "rebuild", 00:12:24.823 "target": "spare", 00:12:24.823 "progress": { 00:12:24.823 "blocks": 22528, 00:12:24.823 "percent": 35 00:12:24.823 } 00:12:24.823 }, 00:12:24.823 "base_bdevs_list": [ 00:12:24.823 { 00:12:24.823 "name": "spare", 00:12:24.823 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:24.823 "is_configured": true, 00:12:24.823 "data_offset": 2048, 00:12:24.823 "data_size": 63488 00:12:24.823 }, 00:12:24.823 { 00:12:24.823 "name": "BaseBdev2", 00:12:24.823 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:24.823 "is_configured": true, 00:12:24.823 "data_offset": 2048, 00:12:24.823 "data_size": 63488 00:12:24.823 } 00:12:24.823 ] 00:12:24.823 }' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.823 02:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.823 02:26:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.204 "name": "raid_bdev1", 00:12:26.204 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:26.204 "strip_size_kb": 0, 00:12:26.204 "state": "online", 00:12:26.204 "raid_level": "raid1", 00:12:26.204 "superblock": true, 00:12:26.204 "num_base_bdevs": 2, 00:12:26.204 "num_base_bdevs_discovered": 2, 00:12:26.204 "num_base_bdevs_operational": 2, 00:12:26.204 "process": { 00:12:26.204 "type": "rebuild", 00:12:26.204 "target": "spare", 00:12:26.204 "progress": { 00:12:26.204 "blocks": 45056, 00:12:26.204 "percent": 70 00:12:26.204 } 00:12:26.204 }, 00:12:26.204 "base_bdevs_list": [ 00:12:26.204 { 
00:12:26.204 "name": "spare", 00:12:26.204 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:26.204 "is_configured": true, 00:12:26.204 "data_offset": 2048, 00:12:26.204 "data_size": 63488 00:12:26.204 }, 00:12:26.204 { 00:12:26.204 "name": "BaseBdev2", 00:12:26.204 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:26.204 "is_configured": true, 00:12:26.204 "data_offset": 2048, 00:12:26.204 "data_size": 63488 00:12:26.204 } 00:12:26.204 ] 00:12:26.204 }' 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.204 02:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.772 [2024-10-13 02:26:45.326637] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:26.772 [2024-10-13 02:26:45.326727] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:26.772 [2024-10-13 02:26:45.326840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.031 02:26:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.031 "name": "raid_bdev1", 00:12:27.031 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:27.031 "strip_size_kb": 0, 00:12:27.031 "state": "online", 00:12:27.031 "raid_level": "raid1", 00:12:27.031 "superblock": true, 00:12:27.031 "num_base_bdevs": 2, 00:12:27.031 "num_base_bdevs_discovered": 2, 00:12:27.031 "num_base_bdevs_operational": 2, 00:12:27.031 "base_bdevs_list": [ 00:12:27.031 { 00:12:27.031 "name": "spare", 00:12:27.031 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:27.031 "is_configured": true, 00:12:27.031 "data_offset": 2048, 00:12:27.031 "data_size": 63488 00:12:27.031 }, 00:12:27.031 { 00:12:27.031 "name": "BaseBdev2", 00:12:27.031 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:27.031 "is_configured": true, 00:12:27.031 "data_offset": 2048, 00:12:27.031 "data_size": 63488 00:12:27.031 } 00:12:27.031 ] 00:12:27.031 }' 00:12:27.031 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.330 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.330 "name": "raid_bdev1", 00:12:27.330 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:27.330 "strip_size_kb": 0, 00:12:27.330 "state": "online", 00:12:27.330 "raid_level": "raid1", 00:12:27.330 "superblock": true, 00:12:27.330 "num_base_bdevs": 2, 00:12:27.331 "num_base_bdevs_discovered": 2, 00:12:27.331 "num_base_bdevs_operational": 2, 00:12:27.331 "base_bdevs_list": [ 00:12:27.331 { 00:12:27.331 "name": "spare", 00:12:27.331 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:27.331 "is_configured": true, 00:12:27.331 "data_offset": 2048, 00:12:27.331 "data_size": 63488 00:12:27.331 }, 00:12:27.331 { 00:12:27.331 "name": 
"BaseBdev2", 00:12:27.331 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:27.331 "is_configured": true, 00:12:27.331 "data_offset": 2048, 00:12:27.331 "data_size": 63488 00:12:27.331 } 00:12:27.331 ] 00:12:27.331 }' 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.331 "name": "raid_bdev1", 00:12:27.331 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:27.331 "strip_size_kb": 0, 00:12:27.331 "state": "online", 00:12:27.331 "raid_level": "raid1", 00:12:27.331 "superblock": true, 00:12:27.331 "num_base_bdevs": 2, 00:12:27.331 "num_base_bdevs_discovered": 2, 00:12:27.331 "num_base_bdevs_operational": 2, 00:12:27.331 "base_bdevs_list": [ 00:12:27.331 { 00:12:27.331 "name": "spare", 00:12:27.331 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:27.331 "is_configured": true, 00:12:27.331 "data_offset": 2048, 00:12:27.331 "data_size": 63488 00:12:27.331 }, 00:12:27.331 { 00:12:27.331 "name": "BaseBdev2", 00:12:27.331 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:27.331 "is_configured": true, 00:12:27.331 "data_offset": 2048, 00:12:27.331 "data_size": 63488 00:12:27.331 } 00:12:27.331 ] 00:12:27.331 }' 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.331 02:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.899 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.900 [2024-10-13 02:26:46.341411] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.900 [2024-10-13 02:26:46.341533] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.900 [2024-10-13 02:26:46.341647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.900 [2024-10-13 02:26:46.341745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.900 [2024-10-13 02:26:46.341793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:27.900 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:28.159 /dev/nbd0 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.159 1+0 records in 00:12:28.159 1+0 records out 00:12:28.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000497257 s, 8.2 MB/s 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.159 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:28.418 /dev/nbd1 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:28.418 02:26:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.418 1+0 records in 00:12:28.418 1+0 records out 00:12:28.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287717 s, 14.2 MB/s 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.418 02:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:28.418 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:28.418 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.418 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.418 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:28.418 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:28.418 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.419 
02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.678 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.938 [2024-10-13 02:26:47.454829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:28.938 [2024-10-13 02:26:47.454983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.938 [2024-10-13 02:26:47.455006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:28.938 [2024-10-13 02:26:47.455019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.938 [2024-10-13 02:26:47.457131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.938 [2024-10-13 02:26:47.457172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:28.938 [2024-10-13 02:26:47.457241] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:28.938 [2024-10-13 02:26:47.457285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:28.938 [2024-10-13 02:26:47.457392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:12:28.938 spare 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.938 [2024-10-13 02:26:47.557287] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:28.938 [2024-10-13 02:26:47.557311] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:28.938 [2024-10-13 02:26:47.557555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:12:28.938 [2024-10-13 02:26:47.557692] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:28.938 [2024-10-13 02:26:47.557708] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:28.938 [2024-10-13 02:26:47.557816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.938 02:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.938 "name": "raid_bdev1", 00:12:28.938 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:28.938 "strip_size_kb": 0, 00:12:28.938 "state": "online", 00:12:28.938 "raid_level": "raid1", 00:12:28.938 "superblock": true, 00:12:28.938 "num_base_bdevs": 2, 00:12:28.938 "num_base_bdevs_discovered": 2, 00:12:28.938 "num_base_bdevs_operational": 2, 00:12:28.939 "base_bdevs_list": [ 00:12:28.939 { 00:12:28.939 "name": "spare", 00:12:28.939 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:28.939 "is_configured": true, 00:12:28.939 "data_offset": 2048, 00:12:28.939 "data_size": 63488 00:12:28.939 }, 00:12:28.939 { 00:12:28.939 "name": "BaseBdev2", 00:12:28.939 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:28.939 "is_configured": true, 00:12:28.939 "data_offset": 2048, 00:12:28.939 "data_size": 63488 00:12:28.939 } 00:12:28.939 ] 00:12:28.939 }' 00:12:28.939 02:26:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.939 02:26:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.508 "name": "raid_bdev1", 00:12:29.508 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:29.508 "strip_size_kb": 0, 00:12:29.508 "state": "online", 00:12:29.508 "raid_level": "raid1", 00:12:29.508 "superblock": true, 00:12:29.508 "num_base_bdevs": 2, 00:12:29.508 "num_base_bdevs_discovered": 2, 00:12:29.508 "num_base_bdevs_operational": 2, 00:12:29.508 "base_bdevs_list": [ 00:12:29.508 { 00:12:29.508 "name": "spare", 00:12:29.508 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:29.508 "is_configured": true, 00:12:29.508 "data_offset": 2048, 00:12:29.508 "data_size": 63488 00:12:29.508 }, 
00:12:29.508 { 00:12:29.508 "name": "BaseBdev2", 00:12:29.508 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:29.508 "is_configured": true, 00:12:29.508 "data_offset": 2048, 00:12:29.508 "data_size": 63488 00:12:29.508 } 00:12:29.508 ] 00:12:29.508 }' 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.508 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.768 [2024-10-13 02:26:48.193635] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.768 "name": "raid_bdev1", 00:12:29.768 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:29.768 "strip_size_kb": 0, 00:12:29.768 "state": "online", 00:12:29.768 "raid_level": "raid1", 00:12:29.768 "superblock": true, 00:12:29.768 "num_base_bdevs": 2, 00:12:29.768 "num_base_bdevs_discovered": 1, 00:12:29.768 "num_base_bdevs_operational": 
1, 00:12:29.768 "base_bdevs_list": [ 00:12:29.768 { 00:12:29.768 "name": null, 00:12:29.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.768 "is_configured": false, 00:12:29.768 "data_offset": 0, 00:12:29.768 "data_size": 63488 00:12:29.768 }, 00:12:29.768 { 00:12:29.768 "name": "BaseBdev2", 00:12:29.768 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:29.768 "is_configured": true, 00:12:29.768 "data_offset": 2048, 00:12:29.768 "data_size": 63488 00:12:29.768 } 00:12:29.768 ] 00:12:29.768 }' 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.768 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.028 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.028 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.029 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.029 [2024-10-13 02:26:48.640945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.029 [2024-10-13 02:26:48.641230] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:30.029 [2024-10-13 02:26:48.641288] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:30.029 [2024-10-13 02:26:48.641355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.029 [2024-10-13 02:26:48.645381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:12:30.029 02:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.029 02:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:30.029 [2024-10-13 02:26:48.647287] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.410 "name": "raid_bdev1", 00:12:31.410 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:31.410 "strip_size_kb": 0, 00:12:31.410 "state": "online", 00:12:31.410 "raid_level": "raid1", 
00:12:31.410 "superblock": true, 00:12:31.410 "num_base_bdevs": 2, 00:12:31.410 "num_base_bdevs_discovered": 2, 00:12:31.410 "num_base_bdevs_operational": 2, 00:12:31.410 "process": { 00:12:31.410 "type": "rebuild", 00:12:31.410 "target": "spare", 00:12:31.410 "progress": { 00:12:31.410 "blocks": 20480, 00:12:31.410 "percent": 32 00:12:31.410 } 00:12:31.410 }, 00:12:31.410 "base_bdevs_list": [ 00:12:31.410 { 00:12:31.410 "name": "spare", 00:12:31.410 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:31.410 "is_configured": true, 00:12:31.410 "data_offset": 2048, 00:12:31.410 "data_size": 63488 00:12:31.410 }, 00:12:31.410 { 00:12:31.410 "name": "BaseBdev2", 00:12:31.410 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:31.410 "is_configured": true, 00:12:31.410 "data_offset": 2048, 00:12:31.410 "data_size": 63488 00:12:31.410 } 00:12:31.410 ] 00:12:31.410 }' 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.410 [2024-10-13 02:26:49.768069] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.410 [2024-10-13 02:26:49.851562] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:31.410 [2024-10-13 02:26:49.851668] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:31.410 [2024-10-13 02:26:49.851704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.410 [2024-10-13 02:26:49.851726] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.410 "name": "raid_bdev1", 00:12:31.410 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:31.410 "strip_size_kb": 0, 00:12:31.410 "state": "online", 00:12:31.410 "raid_level": "raid1", 00:12:31.410 "superblock": true, 00:12:31.410 "num_base_bdevs": 2, 00:12:31.410 "num_base_bdevs_discovered": 1, 00:12:31.410 "num_base_bdevs_operational": 1, 00:12:31.410 "base_bdevs_list": [ 00:12:31.410 { 00:12:31.410 "name": null, 00:12:31.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.410 "is_configured": false, 00:12:31.410 "data_offset": 0, 00:12:31.410 "data_size": 63488 00:12:31.410 }, 00:12:31.410 { 00:12:31.410 "name": "BaseBdev2", 00:12:31.410 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:31.410 "is_configured": true, 00:12:31.410 "data_offset": 2048, 00:12:31.410 "data_size": 63488 00:12:31.410 } 00:12:31.410 ] 00:12:31.410 }' 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.410 02:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.670 02:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:31.670 02:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.670 02:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.670 [2024-10-13 02:26:50.279291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:31.670 [2024-10-13 02:26:50.279422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.670 [2024-10-13 02:26:50.279465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:31.670 [2024-10-13 02:26:50.279496] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.670 [2024-10-13 02:26:50.279944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.670 [2024-10-13 02:26:50.280010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:31.670 [2024-10-13 02:26:50.280126] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:31.670 [2024-10-13 02:26:50.280164] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:31.670 [2024-10-13 02:26:50.280213] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:31.670 [2024-10-13 02:26:50.280305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.670 [2024-10-13 02:26:50.284180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:12:31.670 spare 00:12:31.670 02:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.670 02:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:31.670 [2024-10-13 02:26:50.286023] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.053 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.053 "name": "raid_bdev1", 00:12:33.053 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:33.053 "strip_size_kb": 0, 00:12:33.053 "state": "online", 00:12:33.053 "raid_level": "raid1", 00:12:33.053 "superblock": true, 00:12:33.053 "num_base_bdevs": 2, 00:12:33.053 "num_base_bdevs_discovered": 2, 00:12:33.053 "num_base_bdevs_operational": 2, 00:12:33.053 "process": { 00:12:33.053 "type": "rebuild", 00:12:33.053 "target": "spare", 00:12:33.053 "progress": { 00:12:33.053 "blocks": 20480, 00:12:33.053 "percent": 32 00:12:33.053 } 00:12:33.053 }, 00:12:33.053 "base_bdevs_list": [ 00:12:33.053 { 00:12:33.053 "name": "spare", 00:12:33.053 "uuid": "ae814411-66bc-5f99-b5a7-318fdb03d1b7", 00:12:33.053 "is_configured": true, 00:12:33.053 "data_offset": 2048, 00:12:33.054 "data_size": 63488 00:12:33.054 }, 00:12:33.054 { 00:12:33.054 "name": "BaseBdev2", 00:12:33.054 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:33.054 "is_configured": true, 00:12:33.054 "data_offset": 2048, 00:12:33.054 "data_size": 63488 00:12:33.054 } 00:12:33.054 ] 00:12:33.054 }' 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.054 
02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.054 [2024-10-13 02:26:51.450779] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.054 [2024-10-13 02:26:51.490252] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:33.054 [2024-10-13 02:26:51.490311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.054 [2024-10-13 02:26:51.490324] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.054 [2024-10-13 02:26:51.490334] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.054 "name": "raid_bdev1", 00:12:33.054 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:33.054 "strip_size_kb": 0, 00:12:33.054 "state": "online", 00:12:33.054 "raid_level": "raid1", 00:12:33.054 "superblock": true, 00:12:33.054 "num_base_bdevs": 2, 00:12:33.054 "num_base_bdevs_discovered": 1, 00:12:33.054 "num_base_bdevs_operational": 1, 00:12:33.054 "base_bdevs_list": [ 00:12:33.054 { 00:12:33.054 "name": null, 00:12:33.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.054 "is_configured": false, 00:12:33.054 "data_offset": 0, 00:12:33.054 "data_size": 63488 00:12:33.054 }, 00:12:33.054 { 00:12:33.054 "name": "BaseBdev2", 00:12:33.054 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:33.054 "is_configured": true, 00:12:33.054 "data_offset": 2048, 00:12:33.054 "data_size": 63488 00:12:33.054 } 00:12:33.054 ] 00:12:33.054 }' 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.054 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.314 02:26:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.314 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.314 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.314 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.314 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.314 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.314 02:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.314 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.314 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.314 02:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.574 "name": "raid_bdev1", 00:12:33.574 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:33.574 "strip_size_kb": 0, 00:12:33.574 "state": "online", 00:12:33.574 "raid_level": "raid1", 00:12:33.574 "superblock": true, 00:12:33.574 "num_base_bdevs": 2, 00:12:33.574 "num_base_bdevs_discovered": 1, 00:12:33.574 "num_base_bdevs_operational": 1, 00:12:33.574 "base_bdevs_list": [ 00:12:33.574 { 00:12:33.574 "name": null, 00:12:33.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.574 "is_configured": false, 00:12:33.574 "data_offset": 0, 00:12:33.574 "data_size": 63488 00:12:33.574 }, 00:12:33.574 { 00:12:33.574 "name": "BaseBdev2", 00:12:33.574 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:33.574 "is_configured": true, 00:12:33.574 "data_offset": 2048, 00:12:33.574 "data_size": 
63488 00:12:33.574 } 00:12:33.574 ] 00:12:33.574 }' 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.574 [2024-10-13 02:26:52.105394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:33.574 [2024-10-13 02:26:52.105474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.574 [2024-10-13 02:26:52.105494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:33.574 [2024-10-13 02:26:52.105505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.574 [2024-10-13 02:26:52.105930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.574 [2024-10-13 02:26:52.105953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:33.574 [2024-10-13 02:26:52.106030] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:33.574 [2024-10-13 02:26:52.106048] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:33.574 [2024-10-13 02:26:52.106057] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:33.574 [2024-10-13 02:26:52.106068] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:33.574 BaseBdev1 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.574 02:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.514 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.514 "name": "raid_bdev1", 00:12:34.514 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:34.514 "strip_size_kb": 0, 00:12:34.515 "state": "online", 00:12:34.515 "raid_level": "raid1", 00:12:34.515 "superblock": true, 00:12:34.515 "num_base_bdevs": 2, 00:12:34.515 "num_base_bdevs_discovered": 1, 00:12:34.515 "num_base_bdevs_operational": 1, 00:12:34.515 "base_bdevs_list": [ 00:12:34.515 { 00:12:34.515 "name": null, 00:12:34.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.515 "is_configured": false, 00:12:34.515 "data_offset": 0, 00:12:34.515 "data_size": 63488 00:12:34.515 }, 00:12:34.515 { 00:12:34.515 "name": "BaseBdev2", 00:12:34.515 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:34.515 "is_configured": true, 00:12:34.515 "data_offset": 2048, 00:12:34.515 "data_size": 63488 00:12:34.515 } 00:12:34.515 ] 00:12:34.515 }' 00:12:34.515 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.515 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.083 "name": "raid_bdev1", 00:12:35.083 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:35.083 "strip_size_kb": 0, 00:12:35.083 "state": "online", 00:12:35.083 "raid_level": "raid1", 00:12:35.083 "superblock": true, 00:12:35.083 "num_base_bdevs": 2, 00:12:35.083 "num_base_bdevs_discovered": 1, 00:12:35.083 "num_base_bdevs_operational": 1, 00:12:35.083 "base_bdevs_list": [ 00:12:35.083 { 00:12:35.083 "name": null, 00:12:35.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.083 "is_configured": false, 00:12:35.083 "data_offset": 0, 00:12:35.083 "data_size": 63488 00:12:35.083 }, 00:12:35.083 { 00:12:35.083 "name": "BaseBdev2", 00:12:35.083 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:35.083 "is_configured": true, 00:12:35.083 "data_offset": 2048, 00:12:35.083 "data_size": 63488 00:12:35.083 } 00:12:35.083 ] 00:12:35.083 }' 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.083 02:26:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.083 [2024-10-13 02:26:53.718823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.083 [2024-10-13 02:26:53.719059] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:35.083 [2024-10-13 02:26:53.719123] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:35.083 request: 00:12:35.083 { 00:12:35.083 "base_bdev": "BaseBdev1", 00:12:35.083 "raid_bdev": "raid_bdev1", 00:12:35.083 "method": 
"bdev_raid_add_base_bdev", 00:12:35.083 "req_id": 1 00:12:35.083 } 00:12:35.083 Got JSON-RPC error response 00:12:35.083 response: 00:12:35.083 { 00:12:35.083 "code": -22, 00:12:35.083 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:35.083 } 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.083 02:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:36.464 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.465 02:26:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.465 "name": "raid_bdev1", 00:12:36.465 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:36.465 "strip_size_kb": 0, 00:12:36.465 "state": "online", 00:12:36.465 "raid_level": "raid1", 00:12:36.465 "superblock": true, 00:12:36.465 "num_base_bdevs": 2, 00:12:36.465 "num_base_bdevs_discovered": 1, 00:12:36.465 "num_base_bdevs_operational": 1, 00:12:36.465 "base_bdevs_list": [ 00:12:36.465 { 00:12:36.465 "name": null, 00:12:36.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.465 "is_configured": false, 00:12:36.465 "data_offset": 0, 00:12:36.465 "data_size": 63488 00:12:36.465 }, 00:12:36.465 { 00:12:36.465 "name": "BaseBdev2", 00:12:36.465 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:36.465 "is_configured": true, 00:12:36.465 "data_offset": 2048, 00:12:36.465 "data_size": 63488 00:12:36.465 } 00:12:36.465 ] 00:12:36.465 }' 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.465 02:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.725 "name": "raid_bdev1", 00:12:36.725 "uuid": "57574b27-be23-44f1-abb4-a2a93b76b244", 00:12:36.725 "strip_size_kb": 0, 00:12:36.725 "state": "online", 00:12:36.725 "raid_level": "raid1", 00:12:36.725 "superblock": true, 00:12:36.725 "num_base_bdevs": 2, 00:12:36.725 "num_base_bdevs_discovered": 1, 00:12:36.725 "num_base_bdevs_operational": 1, 00:12:36.725 "base_bdevs_list": [ 00:12:36.725 { 00:12:36.725 "name": null, 00:12:36.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.725 "is_configured": false, 00:12:36.725 "data_offset": 0, 00:12:36.725 "data_size": 63488 00:12:36.725 }, 00:12:36.725 { 00:12:36.725 "name": "BaseBdev2", 00:12:36.725 "uuid": "d79d5c70-3354-530c-807d-e9ff2c0abd3a", 00:12:36.725 "is_configured": true, 00:12:36.725 "data_offset": 2048, 00:12:36.725 "data_size": 63488 00:12:36.725 } 00:12:36.725 ] 00:12:36.725 }' 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86353 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86353 ']' 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86353 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86353 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:36.725 killing process with pid 86353 00:12:36.725 Received shutdown signal, test time was about 60.000000 seconds 00:12:36.725 00:12:36.725 Latency(us) 00:12:36.725 [2024-10-13T02:26:55.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.725 [2024-10-13T02:26:55.409Z] =================================================================================================================== 00:12:36.725 [2024-10-13T02:26:55.409Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86353' 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86353 00:12:36.725 [2024-10-13 02:26:55.391610] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.725 [2024-10-13 
02:26:55.391761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.725 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86353 00:12:36.725 [2024-10-13 02:26:55.391818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.725 [2024-10-13 02:26:55.391828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:36.985 [2024-10-13 02:26:55.423524] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.985 02:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:36.985 ************************************ 00:12:36.985 END TEST raid_rebuild_test_sb 00:12:36.985 ************************************ 00:12:36.985 00:12:36.985 real 0m21.938s 00:12:36.985 user 0m26.520s 00:12:36.985 sys 0m3.622s 00:12:36.985 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.985 02:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.245 02:26:55 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:37.245 02:26:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:37.245 02:26:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.245 02:26:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.245 ************************************ 00:12:37.245 START TEST raid_rebuild_test_io 00:12:37.245 ************************************ 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:37.245 
02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87073 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87073 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87073 ']' 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.245 02:26:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.245 [2024-10-13 02:26:55.839510] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:37.245 [2024-10-13 02:26:55.839756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87073 ] 00:12:37.245 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:37.245 Zero copy mechanism will not be used. 
00:12:37.505 [2024-10-13 02:26:55.988152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.505 [2024-10-13 02:26:56.032771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.505 [2024-10-13 02:26:56.074477] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.505 [2024-10-13 02:26:56.074598] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.075 BaseBdev1_malloc 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.075 [2024-10-13 02:26:56.704595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:38.075 [2024-10-13 02:26:56.704669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.075 [2024-10-13 02:26:56.704694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:38.075 [2024-10-13 
02:26:56.704706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.075 [2024-10-13 02:26:56.706632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.075 [2024-10-13 02:26:56.706706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.075 BaseBdev1 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.075 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.076 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.076 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.076 BaseBdev2_malloc 00:12:38.076 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.076 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:38.076 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.076 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.076 [2024-10-13 02:26:56.754866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:38.076 [2024-10-13 02:26:56.754990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.076 [2024-10-13 02:26:56.755035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:38.076 [2024-10-13 02:26:56.755056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.335 [2024-10-13 02:26:56.759956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:38.335 [2024-10-13 02:26:56.760026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.335 BaseBdev2 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.335 spare_malloc 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.335 spare_delay 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.335 [2024-10-13 02:26:56.798017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:38.335 [2024-10-13 02:26:56.798069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.335 [2024-10-13 02:26:56.798089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:38.335 [2024-10-13 02:26:56.798097] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.335 [2024-10-13 02:26:56.800081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.335 [2024-10-13 02:26:56.800116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:38.335 spare 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.335 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.335 [2024-10-13 02:26:56.810042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.335 [2024-10-13 02:26:56.811730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.335 [2024-10-13 02:26:56.811890] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:38.335 [2024-10-13 02:26:56.811906] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:38.335 [2024-10-13 02:26:56.812149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:38.335 [2024-10-13 02:26:56.812284] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:38.336 [2024-10-13 02:26:56.812295] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:38.336 [2024-10-13 02:26:56.812403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.336 "name": "raid_bdev1", 00:12:38.336 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:38.336 "strip_size_kb": 0, 00:12:38.336 "state": "online", 00:12:38.336 "raid_level": "raid1", 00:12:38.336 "superblock": false, 00:12:38.336 "num_base_bdevs": 2, 00:12:38.336 
"num_base_bdevs_discovered": 2, 00:12:38.336 "num_base_bdevs_operational": 2, 00:12:38.336 "base_bdevs_list": [ 00:12:38.336 { 00:12:38.336 "name": "BaseBdev1", 00:12:38.336 "uuid": "7294c5f6-b3f5-5880-a183-64a956350eca", 00:12:38.336 "is_configured": true, 00:12:38.336 "data_offset": 0, 00:12:38.336 "data_size": 65536 00:12:38.336 }, 00:12:38.336 { 00:12:38.336 "name": "BaseBdev2", 00:12:38.336 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:38.336 "is_configured": true, 00:12:38.336 "data_offset": 0, 00:12:38.336 "data_size": 65536 00:12:38.336 } 00:12:38.336 ] 00:12:38.336 }' 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.336 02:26:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.596 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:38.596 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:38.596 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.596 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.596 [2024-10-13 02:26:57.237581] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.596 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.596 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.855 [2024-10-13 02:26:57.313214] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.855 "name": "raid_bdev1", 00:12:38.855 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:38.855 "strip_size_kb": 0, 00:12:38.855 "state": "online", 00:12:38.855 "raid_level": "raid1", 00:12:38.855 "superblock": false, 00:12:38.855 "num_base_bdevs": 2, 00:12:38.855 "num_base_bdevs_discovered": 1, 00:12:38.855 "num_base_bdevs_operational": 1, 00:12:38.855 "base_bdevs_list": [ 00:12:38.855 { 00:12:38.855 "name": null, 00:12:38.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.855 "is_configured": false, 00:12:38.855 "data_offset": 0, 00:12:38.855 "data_size": 65536 00:12:38.855 }, 00:12:38.855 { 00:12:38.855 "name": "BaseBdev2", 00:12:38.855 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:38.855 "is_configured": true, 00:12:38.855 "data_offset": 0, 00:12:38.855 "data_size": 65536 00:12:38.855 } 00:12:38.855 ] 00:12:38.855 }' 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.855 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.855 [2024-10-13 02:26:57.383068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:12:38.855 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:12:38.855 Zero copy mechanism will not be used. 00:12:38.855 Running I/O for 60 seconds... 00:12:39.115 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.115 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.115 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.115 [2024-10-13 02:26:57.772791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.377 02:26:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.377 02:26:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:39.377 [2024-10-13 02:26:57.808990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:39.377 [2024-10-13 02:26:57.810919] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.377 [2024-10-13 02:26:57.923790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:39.377 [2024-10-13 02:26:57.924210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:39.377 [2024-10-13 02:26:58.058716] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:39.637 [2024-10-13 02:26:58.059034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:39.896 196.00 IOPS, 588.00 MiB/s [2024-10-13T02:26:58.580Z] [2024-10-13 02:26:58.400309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:40.156 [2024-10-13 02:26:58.623817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:40.156 [2024-10-13 02:26:58.624095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:40.156 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.156 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.156 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.156 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.156 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.156 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.156 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.156 02:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.156 02:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.416 02:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.416 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.416 "name": "raid_bdev1", 00:12:40.416 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:40.416 "strip_size_kb": 0, 00:12:40.416 "state": "online", 00:12:40.416 "raid_level": "raid1", 00:12:40.416 "superblock": false, 00:12:40.416 "num_base_bdevs": 2, 00:12:40.416 "num_base_bdevs_discovered": 2, 00:12:40.416 "num_base_bdevs_operational": 2, 00:12:40.416 "process": { 00:12:40.416 "type": "rebuild", 00:12:40.416 "target": "spare", 00:12:40.416 "progress": { 00:12:40.416 "blocks": 10240, 00:12:40.416 "percent": 15 00:12:40.416 } 00:12:40.416 }, 
00:12:40.416 "base_bdevs_list": [ 00:12:40.416 { 00:12:40.416 "name": "spare", 00:12:40.416 "uuid": "23df6c73-0914-599e-8941-b253589d332d", 00:12:40.416 "is_configured": true, 00:12:40.416 "data_offset": 0, 00:12:40.416 "data_size": 65536 00:12:40.416 }, 00:12:40.416 { 00:12:40.416 "name": "BaseBdev2", 00:12:40.416 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:40.416 "is_configured": true, 00:12:40.416 "data_offset": 0, 00:12:40.417 "data_size": 65536 00:12:40.417 } 00:12:40.417 ] 00:12:40.417 }' 00:12:40.417 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.417 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.417 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.417 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.417 [2024-10-13 02:26:58.953583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:40.417 [2024-10-13 02:26:58.953992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:40.417 02:26:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:40.417 02:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.417 02:26:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.417 [2024-10-13 02:26:58.966249] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.417 [2024-10-13 02:26:59.072977] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.417 [2024-10-13 02:26:59.079855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:40.417 [2024-10-13 02:26:59.079898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.417 [2024-10-13 02:26:59.079912] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.417 [2024-10-13 02:26:59.090823] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.677 "name": "raid_bdev1", 00:12:40.677 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:40.677 "strip_size_kb": 0, 00:12:40.677 "state": "online", 00:12:40.677 "raid_level": "raid1", 00:12:40.677 "superblock": false, 00:12:40.677 "num_base_bdevs": 2, 00:12:40.677 "num_base_bdevs_discovered": 1, 00:12:40.677 "num_base_bdevs_operational": 1, 00:12:40.677 "base_bdevs_list": [ 00:12:40.677 { 00:12:40.677 "name": null, 00:12:40.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.677 "is_configured": false, 00:12:40.677 "data_offset": 0, 00:12:40.677 "data_size": 65536 00:12:40.677 }, 00:12:40.677 { 00:12:40.677 "name": "BaseBdev2", 00:12:40.677 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:40.677 "is_configured": true, 00:12:40.677 "data_offset": 0, 00:12:40.677 "data_size": 65536 00:12:40.677 } 00:12:40.677 ] 00:12:40.677 }' 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.677 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.937 185.00 IOPS, 555.00 MiB/s [2024-10-13T02:26:59.621Z] 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.937 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.937 "name": "raid_bdev1", 00:12:40.937 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:40.937 "strip_size_kb": 0, 00:12:40.937 "state": "online", 00:12:40.937 "raid_level": "raid1", 00:12:40.937 "superblock": false, 00:12:40.937 "num_base_bdevs": 2, 00:12:40.937 "num_base_bdevs_discovered": 1, 00:12:40.937 "num_base_bdevs_operational": 1, 00:12:40.937 "base_bdevs_list": [ 00:12:40.937 { 00:12:40.938 "name": null, 00:12:40.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.938 "is_configured": false, 00:12:40.938 "data_offset": 0, 00:12:40.938 "data_size": 65536 00:12:40.938 }, 00:12:40.938 { 00:12:40.938 "name": "BaseBdev2", 00:12:40.938 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:40.938 "is_configured": true, 00:12:40.938 "data_offset": 0, 00:12:40.938 "data_size": 65536 00:12:40.938 } 00:12:40.938 ] 00:12:40.938 }' 00:12:40.938 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.197 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.197 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.197 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.197 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:12:41.197 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.197 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.197 [2024-10-13 02:26:59.684256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:41.197 02:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.197 02:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:41.198 [2024-10-13 02:26:59.721968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:41.198 [2024-10-13 02:26:59.723950] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:41.457 [2024-10-13 02:26:59.970088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:41.457 [2024-10-13 02:26:59.970482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:41.717 [2024-10-13 02:27:00.299858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:41.717 [2024-10-13 02:27:00.306201] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:42.286 187.33 IOPS, 562.00 MiB/s [2024-10-13T02:27:00.970Z] 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.286 02:27:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.286 [2024-10-13 02:27:00.765033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.286 "name": "raid_bdev1", 00:12:42.286 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:42.286 "strip_size_kb": 0, 00:12:42.286 "state": "online", 00:12:42.286 "raid_level": "raid1", 00:12:42.286 "superblock": false, 00:12:42.286 "num_base_bdevs": 2, 00:12:42.286 "num_base_bdevs_discovered": 2, 00:12:42.286 "num_base_bdevs_operational": 2, 00:12:42.286 "process": { 00:12:42.286 "type": "rebuild", 00:12:42.286 "target": "spare", 00:12:42.286 "progress": { 00:12:42.286 "blocks": 12288, 00:12:42.286 "percent": 18 00:12:42.286 } 00:12:42.286 }, 00:12:42.286 "base_bdevs_list": [ 00:12:42.286 { 00:12:42.286 "name": "spare", 00:12:42.286 "uuid": "23df6c73-0914-599e-8941-b253589d332d", 00:12:42.286 "is_configured": true, 00:12:42.286 "data_offset": 0, 00:12:42.286 "data_size": 65536 00:12:42.286 }, 00:12:42.286 { 00:12:42.286 "name": "BaseBdev2", 00:12:42.286 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:42.286 "is_configured": true, 00:12:42.286 "data_offset": 0, 00:12:42.286 "data_size": 65536 00:12:42.286 } 00:12:42.286 ] 00:12:42.286 }' 00:12:42.286 02:27:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=328 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.286 02:27:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.286 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.286 "name": "raid_bdev1", 00:12:42.286 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:42.286 "strip_size_kb": 0, 00:12:42.286 "state": "online", 00:12:42.287 "raid_level": "raid1", 00:12:42.287 "superblock": false, 00:12:42.287 "num_base_bdevs": 2, 00:12:42.287 "num_base_bdevs_discovered": 2, 00:12:42.287 "num_base_bdevs_operational": 2, 00:12:42.287 "process": { 00:12:42.287 "type": "rebuild", 00:12:42.287 "target": "spare", 00:12:42.287 "progress": { 00:12:42.287 "blocks": 14336, 00:12:42.287 "percent": 21 00:12:42.287 } 00:12:42.287 }, 00:12:42.287 "base_bdevs_list": [ 00:12:42.287 { 00:12:42.287 "name": "spare", 00:12:42.287 "uuid": "23df6c73-0914-599e-8941-b253589d332d", 00:12:42.287 "is_configured": true, 00:12:42.287 "data_offset": 0, 00:12:42.287 "data_size": 65536 00:12:42.287 }, 00:12:42.287 { 00:12:42.287 "name": "BaseBdev2", 00:12:42.287 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:42.287 "is_configured": true, 00:12:42.287 "data_offset": 0, 00:12:42.287 "data_size": 65536 00:12:42.287 } 00:12:42.287 ] 00:12:42.287 }' 00:12:42.287 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.287 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.287 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.547 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.547 02:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:42.547 [2024-10-13 02:27:00.993590] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:42.547 [2024-10-13 02:27:01.224508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:42.547 [2024-10-13 02:27:01.225097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:43.377 165.00 IOPS, 495.00 MiB/s [2024-10-13T02:27:02.061Z] [2024-10-13 02:27:01.750464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.377 [2024-10-13 02:27:01.977923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.377 02:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.377 02:27:02 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.377 02:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.377 "name": "raid_bdev1", 00:12:43.377 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:43.377 "strip_size_kb": 0, 00:12:43.377 "state": "online", 00:12:43.377 "raid_level": "raid1", 00:12:43.377 "superblock": false, 00:12:43.377 "num_base_bdevs": 2, 00:12:43.377 "num_base_bdevs_discovered": 2, 00:12:43.377 "num_base_bdevs_operational": 2, 00:12:43.377 "process": { 00:12:43.377 "type": "rebuild", 00:12:43.377 "target": "spare", 00:12:43.377 "progress": { 00:12:43.377 "blocks": 32768, 00:12:43.377 "percent": 50 00:12:43.377 } 00:12:43.377 }, 00:12:43.377 "base_bdevs_list": [ 00:12:43.377 { 00:12:43.377 "name": "spare", 00:12:43.377 "uuid": "23df6c73-0914-599e-8941-b253589d332d", 00:12:43.377 "is_configured": true, 00:12:43.377 "data_offset": 0, 00:12:43.377 "data_size": 65536 00:12:43.377 }, 00:12:43.377 { 00:12:43.377 "name": "BaseBdev2", 00:12:43.377 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:43.377 "is_configured": true, 00:12:43.377 "data_offset": 0, 00:12:43.377 "data_size": 65536 00:12:43.377 } 00:12:43.377 ] 00:12:43.377 }' 00:12:43.377 02:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.637 02:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.637 02:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.637 02:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.637 02:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.896 141.20 IOPS, 423.60 MiB/s [2024-10-13T02:27:02.580Z] [2024-10-13 02:27:02.554434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:43.896 
[2024-10-13 02:27:02.554674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:44.502 [2024-10-13 02:27:03.020802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.502 "name": "raid_bdev1", 00:12:44.502 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:44.502 "strip_size_kb": 0, 00:12:44.502 "state": "online", 00:12:44.502 "raid_level": "raid1", 00:12:44.502 "superblock": false, 00:12:44.502 "num_base_bdevs": 2, 00:12:44.502 "num_base_bdevs_discovered": 2, 00:12:44.502 "num_base_bdevs_operational": 2, 
00:12:44.502 "process": { 00:12:44.502 "type": "rebuild", 00:12:44.502 "target": "spare", 00:12:44.502 "progress": { 00:12:44.502 "blocks": 47104, 00:12:44.502 "percent": 71 00:12:44.502 } 00:12:44.502 }, 00:12:44.502 "base_bdevs_list": [ 00:12:44.502 { 00:12:44.502 "name": "spare", 00:12:44.502 "uuid": "23df6c73-0914-599e-8941-b253589d332d", 00:12:44.502 "is_configured": true, 00:12:44.502 "data_offset": 0, 00:12:44.502 "data_size": 65536 00:12:44.502 }, 00:12:44.502 { 00:12:44.502 "name": "BaseBdev2", 00:12:44.502 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:44.502 "is_configured": true, 00:12:44.502 "data_offset": 0, 00:12:44.502 "data_size": 65536 00:12:44.502 } 00:12:44.502 ] 00:12:44.502 }' 00:12:44.502 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.762 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.762 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.762 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.762 02:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:45.333 124.00 IOPS, 372.00 MiB/s [2024-10-13T02:27:04.017Z] [2024-10-13 02:27:03.783284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:45.592 [2024-10-13 02:27:04.209715] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.853 02:27:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.853 [2024-10-13 02:27:04.314986] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:45.853 [2024-10-13 02:27:04.317044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.853 "name": "raid_bdev1", 00:12:45.853 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:45.853 "strip_size_kb": 0, 00:12:45.853 "state": "online", 00:12:45.853 "raid_level": "raid1", 00:12:45.853 "superblock": false, 00:12:45.853 "num_base_bdevs": 2, 00:12:45.853 "num_base_bdevs_discovered": 2, 00:12:45.853 "num_base_bdevs_operational": 2, 00:12:45.853 "process": { 00:12:45.853 "type": "rebuild", 00:12:45.853 "target": "spare", 00:12:45.853 "progress": { 00:12:45.853 "blocks": 65536, 00:12:45.853 "percent": 100 00:12:45.853 } 00:12:45.853 }, 00:12:45.853 "base_bdevs_list": [ 00:12:45.853 { 00:12:45.853 "name": "spare", 00:12:45.853 "uuid": "23df6c73-0914-599e-8941-b253589d332d", 00:12:45.853 "is_configured": true, 00:12:45.853 "data_offset": 0, 00:12:45.853 "data_size": 65536 
00:12:45.853 }, 00:12:45.853 { 00:12:45.853 "name": "BaseBdev2", 00:12:45.853 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:45.853 "is_configured": true, 00:12:45.853 "data_offset": 0, 00:12:45.853 "data_size": 65536 00:12:45.853 } 00:12:45.853 ] 00:12:45.853 }' 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.853 111.86 IOPS, 335.57 MiB/s [2024-10-13T02:27:04.537Z] 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.853 02:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:46.792 103.12 IOPS, 309.38 MiB/s [2024-10-13T02:27:05.476Z] 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:46.792 02:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.793 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.793 "name": "raid_bdev1", 00:12:46.793 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:46.793 "strip_size_kb": 0, 00:12:46.793 "state": "online", 00:12:46.793 "raid_level": "raid1", 00:12:46.793 "superblock": false, 00:12:46.793 "num_base_bdevs": 2, 00:12:46.793 "num_base_bdevs_discovered": 2, 00:12:46.793 "num_base_bdevs_operational": 2, 00:12:46.793 "base_bdevs_list": [ 00:12:46.793 { 00:12:46.793 "name": "spare", 00:12:46.793 "uuid": "23df6c73-0914-599e-8941-b253589d332d", 00:12:46.793 "is_configured": true, 00:12:46.793 "data_offset": 0, 00:12:46.793 "data_size": 65536 00:12:46.793 }, 00:12:46.793 { 00:12:46.793 "name": "BaseBdev2", 00:12:46.793 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:46.793 "is_configured": true, 00:12:46.793 "data_offset": 0, 00:12:46.793 "data_size": 65536 00:12:46.793 } 00:12:46.793 ] 00:12:46.793 }' 00:12:46.793 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.053 "name": "raid_bdev1", 00:12:47.053 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:47.053 "strip_size_kb": 0, 00:12:47.053 "state": "online", 00:12:47.053 "raid_level": "raid1", 00:12:47.053 "superblock": false, 00:12:47.053 "num_base_bdevs": 2, 00:12:47.053 "num_base_bdevs_discovered": 2, 00:12:47.053 "num_base_bdevs_operational": 2, 00:12:47.053 "base_bdevs_list": [ 00:12:47.053 { 00:12:47.053 "name": "spare", 00:12:47.053 "uuid": "23df6c73-0914-599e-8941-b253589d332d", 00:12:47.053 "is_configured": true, 00:12:47.053 "data_offset": 0, 00:12:47.053 "data_size": 65536 00:12:47.053 }, 00:12:47.053 { 00:12:47.053 "name": "BaseBdev2", 00:12:47.053 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:47.053 "is_configured": true, 00:12:47.053 "data_offset": 0, 00:12:47.053 "data_size": 65536 00:12:47.053 } 00:12:47.053 ] 00:12:47.053 }' 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.053 02:27:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.053 02:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.313 02:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.313 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.313 "name": "raid_bdev1", 
00:12:47.313 "uuid": "b89a1083-a518-455c-ade2-0abbc2fffe04", 00:12:47.313 "strip_size_kb": 0, 00:12:47.313 "state": "online", 00:12:47.313 "raid_level": "raid1", 00:12:47.313 "superblock": false, 00:12:47.313 "num_base_bdevs": 2, 00:12:47.313 "num_base_bdevs_discovered": 2, 00:12:47.313 "num_base_bdevs_operational": 2, 00:12:47.313 "base_bdevs_list": [ 00:12:47.313 { 00:12:47.313 "name": "spare", 00:12:47.313 "uuid": "23df6c73-0914-599e-8941-b253589d332d", 00:12:47.313 "is_configured": true, 00:12:47.313 "data_offset": 0, 00:12:47.313 "data_size": 65536 00:12:47.313 }, 00:12:47.313 { 00:12:47.313 "name": "BaseBdev2", 00:12:47.313 "uuid": "2611a8f1-a0e1-585d-89bf-a742c2c2651a", 00:12:47.313 "is_configured": true, 00:12:47.313 "data_offset": 0, 00:12:47.313 "data_size": 65536 00:12:47.313 } 00:12:47.313 ] 00:12:47.313 }' 00:12:47.313 02:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.313 02:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.574 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:47.574 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.574 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.574 [2024-10-13 02:27:06.199868] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.574 [2024-10-13 02:27:06.199992] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.574 00:12:47.574 Latency(us) 00:12:47.574 [2024-10-13T02:27:06.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.574 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:47.574 raid_bdev1 : 8.87 95.81 287.44 0.00 0.00 15661.00 282.61 114015.47 00:12:47.574 [2024-10-13T02:27:06.258Z] 
=================================================================================================================== 00:12:47.574 [2024-10-13T02:27:06.258Z] Total : 95.81 287.44 0.00 0.00 15661.00 282.61 114015.47 00:12:47.574 { 00:12:47.574 "results": [ 00:12:47.574 { 00:12:47.574 "job": "raid_bdev1", 00:12:47.574 "core_mask": "0x1", 00:12:47.574 "workload": "randrw", 00:12:47.574 "percentage": 50, 00:12:47.574 "status": "finished", 00:12:47.574 "queue_depth": 2, 00:12:47.574 "io_size": 3145728, 00:12:47.574 "runtime": 8.871489, 00:12:47.574 "iops": 95.81255187263378, 00:12:47.574 "mibps": 287.4376556179013, 00:12:47.574 "io_failed": 0, 00:12:47.574 "io_timeout": 0, 00:12:47.574 "avg_latency_us": 15661.001814538915, 00:12:47.574 "min_latency_us": 282.6061135371179, 00:12:47.574 "max_latency_us": 114015.46899563319 00:12:47.574 } 00:12:47.574 ], 00:12:47.574 "core_count": 1 00:12:47.574 } 00:12:47.574 [2024-10-13 02:27:06.242988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.574 [2024-10-13 02:27:06.243033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.574 [2024-10-13 02:27:06.243109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.574 [2024-10-13 02:27:06.243119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:47.574 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.574 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.574 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.574 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:47.574 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.834 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:47.834 /dev/nbd0 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( 
i = 1 )) 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.095 1+0 records in 00:12:48.095 1+0 records out 00:12:48.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494498 s, 8.3 MB/s 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock 
BaseBdev2 /dev/nbd1 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:48.095 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:48.095 /dev/nbd1 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:48.355 
02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.355 1+0 records in 00:12:48.355 1+0 records out 00:12:48.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043409 s, 9.4 MB/s 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:48.355 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.356 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:48.356 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.356 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:48.356 02:27:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.356 02:27:06 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.616 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:48.875 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:48.876 02:27:07 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87073 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87073 ']' 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87073 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87073 00:12:48.876 killing process with pid 87073 00:12:48.876 Received shutdown signal, test time was about 10.041797 seconds 00:12:48.876 00:12:48.876 Latency(us) 00:12:48.876 [2024-10-13T02:27:07.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.876 [2024-10-13T02:27:07.560Z] =================================================================================================================== 00:12:48.876 [2024-10-13T02:27:07.560Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87073' 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87073 00:12:48.876 [2024-10-13 02:27:07.407797] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.876 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87073 00:12:48.876 [2024-10-13 02:27:07.434148] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:49.136 00:12:49.136 real 0m11.925s 00:12:49.136 user 0m15.105s 00:12:49.136 sys 0m1.601s 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.136 ************************************ 00:12:49.136 END TEST raid_rebuild_test_io 00:12:49.136 ************************************ 00:12:49.136 02:27:07 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:49.136 02:27:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:49.136 02:27:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.136 02:27:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.136 ************************************ 00:12:49.136 START TEST raid_rebuild_test_sb_io 00:12:49.136 ************************************ 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- 
# local num_base_bdevs=2 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87457 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87457 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87457 ']' 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.136 02:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.396 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:49.396 Zero copy mechanism will not be used. 00:12:49.396 [2024-10-13 02:27:07.839683] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:49.396 [2024-10-13 02:27:07.839805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87457 ] 00:12:49.397 [2024-10-13 02:27:07.986433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.397 [2024-10-13 02:27:08.030392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.397 [2024-10-13 02:27:08.072249] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.397 [2024-10-13 02:27:08.072290] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.336 BaseBdev1_malloc 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.336 [2024-10-13 02:27:08.718084] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:50.336 [2024-10-13 02:27:08.718152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.336 [2024-10-13 02:27:08.718181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:50.336 [2024-10-13 02:27:08.718202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.336 [2024-10-13 02:27:08.720294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.336 [2024-10-13 02:27:08.720333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:50.336 BaseBdev1 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.336 BaseBdev2_malloc 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.336 [2024-10-13 02:27:08.760720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:50.336 [2024-10-13 02:27:08.760842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:50.336 [2024-10-13 02:27:08.760934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:50.336 [2024-10-13 02:27:08.760964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.336 [2024-10-13 02:27:08.765252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.336 [2024-10-13 02:27:08.765315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:50.336 BaseBdev2 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.336 spare_malloc 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.336 spare_delay 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.336 
[2024-10-13 02:27:08.802863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:50.336 [2024-10-13 02:27:08.802924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.336 [2024-10-13 02:27:08.802960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:50.336 [2024-10-13 02:27:08.802968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.336 [2024-10-13 02:27:08.804953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.336 [2024-10-13 02:27:08.804987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:50.336 spare 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.336 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.336 [2024-10-13 02:27:08.814908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.336 [2024-10-13 02:27:08.816634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.336 [2024-10-13 02:27:08.816784] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:50.336 [2024-10-13 02:27:08.816810] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:50.336 [2024-10-13 02:27:08.817052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:50.336 [2024-10-13 02:27:08.817174] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:50.336 [2024-10-13 
02:27:08.817198] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:50.336 [2024-10-13 02:27:08.817304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.337 "name": "raid_bdev1", 00:12:50.337 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:50.337 "strip_size_kb": 0, 00:12:50.337 "state": "online", 00:12:50.337 "raid_level": "raid1", 00:12:50.337 "superblock": true, 00:12:50.337 "num_base_bdevs": 2, 00:12:50.337 "num_base_bdevs_discovered": 2, 00:12:50.337 "num_base_bdevs_operational": 2, 00:12:50.337 "base_bdevs_list": [ 00:12:50.337 { 00:12:50.337 "name": "BaseBdev1", 00:12:50.337 "uuid": "3eecf352-661c-5448-bbe5-0339f1ce0821", 00:12:50.337 "is_configured": true, 00:12:50.337 "data_offset": 2048, 00:12:50.337 "data_size": 63488 00:12:50.337 }, 00:12:50.337 { 00:12:50.337 "name": "BaseBdev2", 00:12:50.337 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:50.337 "is_configured": true, 00:12:50.337 "data_offset": 2048, 00:12:50.337 "data_size": 63488 00:12:50.337 } 00:12:50.337 ] 00:12:50.337 }' 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.337 02:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.907 [2024-10-13 02:27:09.302278] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.907 [2024-10-13 02:27:09.389934] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.907 "name": "raid_bdev1", 00:12:50.907 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:50.907 "strip_size_kb": 0, 00:12:50.907 "state": "online", 00:12:50.907 "raid_level": "raid1", 00:12:50.907 "superblock": true, 00:12:50.907 "num_base_bdevs": 2, 00:12:50.907 "num_base_bdevs_discovered": 1, 00:12:50.907 "num_base_bdevs_operational": 1, 00:12:50.907 "base_bdevs_list": [ 00:12:50.907 { 00:12:50.907 "name": null, 00:12:50.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.907 "is_configured": false, 00:12:50.907 "data_offset": 0, 00:12:50.907 "data_size": 63488 00:12:50.907 }, 00:12:50.907 { 00:12:50.907 "name": "BaseBdev2", 00:12:50.907 "uuid": 
"284e94d9-a906-577f-96ca-92a29669c13a", 00:12:50.907 "is_configured": true, 00:12:50.907 "data_offset": 2048, 00:12:50.907 "data_size": 63488 00:12:50.907 } 00:12:50.907 ] 00:12:50.907 }' 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.907 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.907 [2024-10-13 02:27:09.539690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:12:50.907 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:50.907 Zero copy mechanism will not be used. 00:12:50.907 Running I/O for 60 seconds... 00:12:51.477 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.477 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.477 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.477 [2024-10-13 02:27:09.866389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.477 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.477 02:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:51.477 [2024-10-13 02:27:09.912773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:51.477 [2024-10-13 02:27:09.914630] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.477 [2024-10-13 02:27:10.027045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.477 [2024-10-13 02:27:10.027493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.741 [2024-10-13 02:27:10.245207] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:51.741 [2024-10-13 02:27:10.245421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:52.002 158.00 IOPS, 474.00 MiB/s [2024-10-13T02:27:10.686Z] [2024-10-13 02:27:10.578469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.261 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.261 [2024-10-13 02:27:10.935864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:52.535 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.535 "name": "raid_bdev1", 00:12:52.535 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:52.535 
"strip_size_kb": 0, 00:12:52.535 "state": "online", 00:12:52.535 "raid_level": "raid1", 00:12:52.535 "superblock": true, 00:12:52.535 "num_base_bdevs": 2, 00:12:52.535 "num_base_bdevs_discovered": 2, 00:12:52.535 "num_base_bdevs_operational": 2, 00:12:52.535 "process": { 00:12:52.535 "type": "rebuild", 00:12:52.535 "target": "spare", 00:12:52.535 "progress": { 00:12:52.535 "blocks": 12288, 00:12:52.535 "percent": 19 00:12:52.535 } 00:12:52.535 }, 00:12:52.535 "base_bdevs_list": [ 00:12:52.535 { 00:12:52.535 "name": "spare", 00:12:52.535 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:12:52.535 "is_configured": true, 00:12:52.535 "data_offset": 2048, 00:12:52.535 "data_size": 63488 00:12:52.535 }, 00:12:52.535 { 00:12:52.535 "name": "BaseBdev2", 00:12:52.535 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:52.535 "is_configured": true, 00:12:52.535 "data_offset": 2048, 00:12:52.535 "data_size": 63488 00:12:52.535 } 00:12:52.535 ] 00:12:52.535 }' 00:12:52.535 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.535 02:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.535 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.535 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.535 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.535 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.535 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.535 [2024-10-13 02:27:11.054175] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.535 [2024-10-13 02:27:11.160065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:12:52.810 [2024-10-13 02:27:11.266360] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:52.810 [2024-10-13 02:27:11.273763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.810 [2024-10-13 02:27:11.273803] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.810 [2024-10-13 02:27:11.273818] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:52.810 [2024-10-13 02:27:11.290041] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.810 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.810 "name": "raid_bdev1", 00:12:52.810 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:52.811 "strip_size_kb": 0, 00:12:52.811 "state": "online", 00:12:52.811 "raid_level": "raid1", 00:12:52.811 "superblock": true, 00:12:52.811 "num_base_bdevs": 2, 00:12:52.811 "num_base_bdevs_discovered": 1, 00:12:52.811 "num_base_bdevs_operational": 1, 00:12:52.811 "base_bdevs_list": [ 00:12:52.811 { 00:12:52.811 "name": null, 00:12:52.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.811 "is_configured": false, 00:12:52.811 "data_offset": 0, 00:12:52.811 "data_size": 63488 00:12:52.811 }, 00:12:52.811 { 00:12:52.811 "name": "BaseBdev2", 00:12:52.811 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:52.811 "is_configured": true, 00:12:52.811 "data_offset": 2048, 00:12:52.811 "data_size": 63488 00:12:52.811 } 00:12:52.811 ] 00:12:52.811 }' 00:12:52.811 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.811 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.331 155.00 IOPS, 465.00 MiB/s [2024-10-13T02:27:12.015Z] 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.331 
02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.331 "name": "raid_bdev1", 00:12:53.331 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:53.331 "strip_size_kb": 0, 00:12:53.331 "state": "online", 00:12:53.331 "raid_level": "raid1", 00:12:53.331 "superblock": true, 00:12:53.331 "num_base_bdevs": 2, 00:12:53.331 "num_base_bdevs_discovered": 1, 00:12:53.331 "num_base_bdevs_operational": 1, 00:12:53.331 "base_bdevs_list": [ 00:12:53.331 { 00:12:53.331 "name": null, 00:12:53.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.331 "is_configured": false, 00:12:53.331 "data_offset": 0, 00:12:53.331 "data_size": 63488 00:12:53.331 }, 00:12:53.331 { 00:12:53.331 "name": "BaseBdev2", 00:12:53.331 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:53.331 "is_configured": true, 00:12:53.331 "data_offset": 2048, 00:12:53.331 "data_size": 63488 00:12:53.331 } 00:12:53.331 ] 00:12:53.331 }' 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.331 02:27:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.331 [2024-10-13 02:27:11.924518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.331 02:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:53.331 [2024-10-13 02:27:11.961492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:53.331 [2024-10-13 02:27:11.963364] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:53.591 [2024-10-13 02:27:12.070481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:53.591 [2024-10-13 02:27:12.070824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:53.850 [2024-10-13 02:27:12.288955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:53.850 [2024-10-13 02:27:12.289203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:54.110 168.00 IOPS, 504.00 MiB/s [2024-10-13T02:27:12.794Z] [2024-10-13 02:27:12.634823] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:54.110 [2024-10-13 02:27:12.756829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.370 02:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.370 "name": "raid_bdev1", 00:12:54.370 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:54.370 "strip_size_kb": 0, 00:12:54.370 "state": "online", 00:12:54.370 "raid_level": "raid1", 00:12:54.370 "superblock": true, 00:12:54.370 "num_base_bdevs": 2, 00:12:54.370 "num_base_bdevs_discovered": 2, 00:12:54.370 "num_base_bdevs_operational": 2, 00:12:54.370 "process": { 00:12:54.370 "type": "rebuild", 00:12:54.370 "target": "spare", 00:12:54.370 "progress": { 
00:12:54.370 "blocks": 10240, 00:12:54.370 "percent": 16 00:12:54.370 } 00:12:54.370 }, 00:12:54.370 "base_bdevs_list": [ 00:12:54.370 { 00:12:54.370 "name": "spare", 00:12:54.370 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:12:54.370 "is_configured": true, 00:12:54.370 "data_offset": 2048, 00:12:54.370 "data_size": 63488 00:12:54.370 }, 00:12:54.370 { 00:12:54.370 "name": "BaseBdev2", 00:12:54.370 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:54.370 "is_configured": true, 00:12:54.370 "data_offset": 2048, 00:12:54.370 "data_size": 63488 00:12:54.370 } 00:12:54.370 ] 00:12:54.370 }' 00:12:54.370 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.370 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.370 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:54.630 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=341 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.630 [2024-10-13 02:27:13.097829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.630 "name": "raid_bdev1", 00:12:54.630 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:54.630 "strip_size_kb": 0, 00:12:54.630 "state": "online", 00:12:54.630 "raid_level": "raid1", 00:12:54.630 "superblock": true, 00:12:54.630 "num_base_bdevs": 2, 00:12:54.630 "num_base_bdevs_discovered": 2, 00:12:54.630 "num_base_bdevs_operational": 2, 00:12:54.630 "process": { 00:12:54.630 "type": "rebuild", 00:12:54.630 "target": "spare", 00:12:54.630 "progress": { 00:12:54.630 "blocks": 12288, 00:12:54.630 "percent": 19 00:12:54.630 } 00:12:54.630 }, 00:12:54.630 "base_bdevs_list": [ 00:12:54.630 { 00:12:54.630 "name": "spare", 
00:12:54.630 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:12:54.630 "is_configured": true, 00:12:54.630 "data_offset": 2048, 00:12:54.630 "data_size": 63488 00:12:54.630 }, 00:12:54.630 { 00:12:54.630 "name": "BaseBdev2", 00:12:54.630 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:54.630 "is_configured": true, 00:12:54.630 "data_offset": 2048, 00:12:54.630 "data_size": 63488 00:12:54.630 } 00:12:54.630 ] 00:12:54.630 }' 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.630 02:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.890 [2024-10-13 02:27:13.327928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:54.890 [2024-10-13 02:27:13.328221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:55.150 149.25 IOPS, 447.75 MiB/s [2024-10-13T02:27:13.834Z] [2024-10-13 02:27:13.645227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:55.410 [2024-10-13 02:27:13.847102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.670 02:27:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.670 [2024-10-13 02:27:14.230010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.670 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.670 "name": "raid_bdev1", 00:12:55.670 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:55.670 "strip_size_kb": 0, 00:12:55.670 "state": "online", 00:12:55.670 "raid_level": "raid1", 00:12:55.670 "superblock": true, 00:12:55.670 "num_base_bdevs": 2, 00:12:55.670 "num_base_bdevs_discovered": 2, 00:12:55.670 "num_base_bdevs_operational": 2, 00:12:55.670 "process": { 00:12:55.670 "type": "rebuild", 00:12:55.670 "target": "spare", 00:12:55.670 "progress": { 00:12:55.670 "blocks": 26624, 00:12:55.670 "percent": 41 00:12:55.670 } 00:12:55.670 }, 00:12:55.670 "base_bdevs_list": [ 00:12:55.670 { 00:12:55.671 "name": "spare", 00:12:55.671 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:12:55.671 "is_configured": true, 
00:12:55.671 "data_offset": 2048, 00:12:55.671 "data_size": 63488 00:12:55.671 }, 00:12:55.671 { 00:12:55.671 "name": "BaseBdev2", 00:12:55.671 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:55.671 "is_configured": true, 00:12:55.671 "data_offset": 2048, 00:12:55.671 "data_size": 63488 00:12:55.671 } 00:12:55.671 ] 00:12:55.671 }' 00:12:55.671 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.671 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.671 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.930 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.930 02:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.190 136.20 IOPS, 408.60 MiB/s [2024-10-13T02:27:14.874Z] [2024-10-13 02:27:14.652264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:56.760 [2024-10-13 02:27:15.222293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.760 02:27:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.760 "name": "raid_bdev1", 00:12:56.760 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:56.760 "strip_size_kb": 0, 00:12:56.760 "state": "online", 00:12:56.760 "raid_level": "raid1", 00:12:56.760 "superblock": true, 00:12:56.760 "num_base_bdevs": 2, 00:12:56.760 "num_base_bdevs_discovered": 2, 00:12:56.760 "num_base_bdevs_operational": 2, 00:12:56.760 "process": { 00:12:56.760 "type": "rebuild", 00:12:56.760 "target": "spare", 00:12:56.760 "progress": { 00:12:56.760 "blocks": 45056, 00:12:56.760 "percent": 70 00:12:56.760 } 00:12:56.760 }, 00:12:56.760 "base_bdevs_list": [ 00:12:56.760 { 00:12:56.760 "name": "spare", 00:12:56.760 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:12:56.760 "is_configured": true, 00:12:56.760 "data_offset": 2048, 00:12:56.760 "data_size": 63488 00:12:56.760 }, 00:12:56.760 { 00:12:56.760 "name": "BaseBdev2", 00:12:56.760 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:56.760 "is_configured": true, 00:12:56.760 "data_offset": 2048, 00:12:56.760 "data_size": 63488 00:12:56.760 } 00:12:56.760 ] 00:12:56.760 }' 00:12:56.760 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.020 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.020 02:27:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.020 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.020 02:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:57.590 122.83 IOPS, 368.50 MiB/s [2024-10-13T02:27:16.274Z] [2024-10-13 02:27:16.080604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:57.852 [2024-10-13 02:27:16.407729] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:57.852 [2024-10-13 02:27:16.512824] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:57.852 [2024-10-13 02:27:16.514467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.852 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.852 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.852 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.852 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.852 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.852 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.112 02:27:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.112 109.86 IOPS, 329.57 MiB/s [2024-10-13T02:27:16.796Z] 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.112 "name": "raid_bdev1", 00:12:58.112 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:58.112 "strip_size_kb": 0, 00:12:58.112 "state": "online", 00:12:58.112 "raid_level": "raid1", 00:12:58.112 "superblock": true, 00:12:58.112 "num_base_bdevs": 2, 00:12:58.112 "num_base_bdevs_discovered": 2, 00:12:58.112 "num_base_bdevs_operational": 2, 00:12:58.112 "base_bdevs_list": [ 00:12:58.112 { 00:12:58.112 "name": "spare", 00:12:58.112 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:12:58.112 "is_configured": true, 00:12:58.112 "data_offset": 2048, 00:12:58.112 "data_size": 63488 00:12:58.112 }, 00:12:58.112 { 00:12:58.112 "name": "BaseBdev2", 00:12:58.112 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:58.112 "is_configured": true, 00:12:58.112 "data_offset": 2048, 00:12:58.112 "data_size": 63488 00:12:58.112 } 00:12:58.112 ] 00:12:58.112 }' 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.112 "name": "raid_bdev1", 00:12:58.112 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:58.112 "strip_size_kb": 0, 00:12:58.112 "state": "online", 00:12:58.112 "raid_level": "raid1", 00:12:58.112 "superblock": true, 00:12:58.112 "num_base_bdevs": 2, 00:12:58.112 "num_base_bdevs_discovered": 2, 00:12:58.112 "num_base_bdevs_operational": 2, 00:12:58.112 "base_bdevs_list": [ 00:12:58.112 { 00:12:58.112 "name": "spare", 00:12:58.112 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:12:58.112 "is_configured": true, 00:12:58.112 "data_offset": 2048, 00:12:58.112 "data_size": 63488 00:12:58.112 }, 00:12:58.112 { 00:12:58.112 "name": "BaseBdev2", 00:12:58.112 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:58.112 "is_configured": true, 00:12:58.112 "data_offset": 2048, 00:12:58.112 "data_size": 63488 00:12:58.112 } 00:12:58.112 ] 00:12:58.112 }' 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.112 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.371 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.371 "name": "raid_bdev1", 00:12:58.371 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:58.371 "strip_size_kb": 0, 00:12:58.371 "state": "online", 00:12:58.371 "raid_level": "raid1", 00:12:58.371 "superblock": true, 00:12:58.371 "num_base_bdevs": 2, 00:12:58.371 "num_base_bdevs_discovered": 2, 00:12:58.371 "num_base_bdevs_operational": 2, 00:12:58.371 "base_bdevs_list": [ 00:12:58.371 { 00:12:58.371 "name": "spare", 00:12:58.372 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:12:58.372 "is_configured": true, 00:12:58.372 "data_offset": 2048, 00:12:58.372 "data_size": 63488 00:12:58.372 }, 00:12:58.372 { 00:12:58.372 "name": "BaseBdev2", 00:12:58.372 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:58.372 "is_configured": true, 00:12:58.372 "data_offset": 2048, 00:12:58.372 "data_size": 63488 00:12:58.372 } 00:12:58.372 ] 00:12:58.372 }' 00:12:58.372 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.372 02:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.632 [2024-10-13 02:27:17.220308] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.632 [2024-10-13 02:27:17.220347] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.632 00:12:58.632 Latency(us) 00:12:58.632 [2024-10-13T02:27:17.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.632 Job: raid_bdev1 (Core Mask 
0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:58.632 raid_bdev1 : 7.74 102.10 306.30 0.00 0.00 13656.19 257.57 113099.68 00:12:58.632 [2024-10-13T02:27:17.316Z] =================================================================================================================== 00:12:58.632 [2024-10-13T02:27:17.316Z] Total : 102.10 306.30 0.00 0.00 13656.19 257.57 113099.68 00:12:58.632 [2024-10-13 02:27:17.267189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.632 [2024-10-13 02:27:17.267240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.632 [2024-10-13 02:27:17.267305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.632 [2024-10-13 02:27:17.267320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:58.632 { 00:12:58.632 "results": [ 00:12:58.632 { 00:12:58.632 "job": "raid_bdev1", 00:12:58.632 "core_mask": "0x1", 00:12:58.632 "workload": "randrw", 00:12:58.632 "percentage": 50, 00:12:58.632 "status": "finished", 00:12:58.632 "queue_depth": 2, 00:12:58.632 "io_size": 3145728, 00:12:58.632 "runtime": 7.737402, 00:12:58.632 "iops": 102.10145472601785, 00:12:58.632 "mibps": 306.30436417805356, 00:12:58.632 "io_failed": 0, 00:12:58.632 "io_timeout": 0, 00:12:58.632 "avg_latency_us": 13656.186488309104, 00:12:58.632 "min_latency_us": 257.5650655021834, 00:12:58.632 "max_latency_us": 113099.68209606987 00:12:58.632 } 00:12:58.632 ], 00:12:58.632 "core_count": 1 00:12:58.632 } 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.632 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:58.892 /dev/nbd0 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:58.892 02:27:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.892 1+0 records in 00:12:58.892 1+0 records out 00:12:58.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330286 s, 12.4 MB/s 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.892 
02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.892 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:58.893 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.893 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.893 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.893 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.893 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:59.153 /dev/nbd1 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.153 1+0 records in 00:12:59.153 1+0 records out 00:12:59.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418581 s, 9.8 MB/s 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:59.153 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.419 02:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:59.680 02:27:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.680 
[2024-10-13 02:27:18.338269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:59.680 [2024-10-13 02:27:18.338333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.680 [2024-10-13 02:27:18.338354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:59.680 [2024-10-13 02:27:18.338365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.680 [2024-10-13 02:27:18.340447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.680 [2024-10-13 02:27:18.340486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.680 [2024-10-13 02:27:18.340563] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:59.680 [2024-10-13 02:27:18.340597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.680 [2024-10-13 02:27:18.340687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.680 spare 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.680 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.940 [2024-10-13 02:27:18.440583] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:59.940 [2024-10-13 02:27:18.440612] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.940 [2024-10-13 02:27:18.440936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:12:59.940 [2024-10-13 02:27:18.441104] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:59.940 [2024-10-13 02:27:18.441124] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:59.940 [2024-10-13 02:27:18.441276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.940 02:27:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.940 "name": "raid_bdev1", 00:12:59.940 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:12:59.940 "strip_size_kb": 0, 00:12:59.940 "state": "online", 00:12:59.940 "raid_level": "raid1", 00:12:59.940 "superblock": true, 00:12:59.940 "num_base_bdevs": 2, 00:12:59.940 "num_base_bdevs_discovered": 2, 00:12:59.940 "num_base_bdevs_operational": 2, 00:12:59.940 "base_bdevs_list": [ 00:12:59.940 { 00:12:59.940 "name": "spare", 00:12:59.940 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:12:59.940 "is_configured": true, 00:12:59.940 "data_offset": 2048, 00:12:59.940 "data_size": 63488 00:12:59.940 }, 00:12:59.940 { 00:12:59.940 "name": "BaseBdev2", 00:12:59.940 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:12:59.940 "is_configured": true, 00:12:59.940 "data_offset": 2048, 00:12:59.940 "data_size": 63488 00:12:59.940 } 00:12:59.940 ] 00:12:59.940 }' 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.940 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.200 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.200 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.200 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.200 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.200 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.200 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.200 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.200 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.200 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.459 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.459 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.459 "name": "raid_bdev1", 00:13:00.459 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:00.459 "strip_size_kb": 0, 00:13:00.459 "state": "online", 00:13:00.459 "raid_level": "raid1", 00:13:00.459 "superblock": true, 00:13:00.459 "num_base_bdevs": 2, 00:13:00.459 "num_base_bdevs_discovered": 2, 00:13:00.459 "num_base_bdevs_operational": 2, 00:13:00.459 "base_bdevs_list": [ 00:13:00.459 { 00:13:00.459 "name": "spare", 00:13:00.459 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:13:00.459 "is_configured": true, 00:13:00.459 "data_offset": 2048, 00:13:00.459 "data_size": 63488 00:13:00.459 }, 00:13:00.459 { 00:13:00.459 "name": "BaseBdev2", 00:13:00.459 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:00.459 "is_configured": true, 00:13:00.459 "data_offset": 2048, 00:13:00.459 "data_size": 63488 00:13:00.459 } 00:13:00.459 ] 00:13:00.459 }' 00:13:00.459 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.460 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.460 02:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.460 [2024-10-13 02:27:19.069167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.460 02:27:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.460 "name": "raid_bdev1", 00:13:00.460 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:00.460 "strip_size_kb": 0, 00:13:00.460 "state": "online", 00:13:00.460 "raid_level": "raid1", 00:13:00.460 "superblock": true, 00:13:00.460 "num_base_bdevs": 2, 00:13:00.460 "num_base_bdevs_discovered": 1, 00:13:00.460 "num_base_bdevs_operational": 1, 00:13:00.460 "base_bdevs_list": [ 00:13:00.460 { 00:13:00.460 "name": null, 00:13:00.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.460 "is_configured": false, 00:13:00.460 "data_offset": 0, 00:13:00.460 "data_size": 63488 00:13:00.460 }, 00:13:00.460 { 00:13:00.460 "name": "BaseBdev2", 00:13:00.460 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:00.460 "is_configured": true, 00:13:00.460 "data_offset": 2048, 00:13:00.460 "data_size": 63488 00:13:00.460 } 00:13:00.460 ] 00:13:00.460 }' 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.460 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.030 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.030 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.030 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.030 [2024-10-13 02:27:19.516481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.030 [2024-10-13 02:27:19.516634] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:01.030 [2024-10-13 02:27:19.516648] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:01.030 [2024-10-13 02:27:19.516683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.030 [2024-10-13 02:27:19.521010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:13:01.030 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.030 02:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:01.030 [2024-10-13 02:27:19.522864] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.970 "name": "raid_bdev1", 00:13:01.970 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:01.970 "strip_size_kb": 0, 00:13:01.970 "state": "online", 00:13:01.970 "raid_level": "raid1", 00:13:01.970 "superblock": true, 00:13:01.970 "num_base_bdevs": 2, 00:13:01.970 "num_base_bdevs_discovered": 2, 00:13:01.970 "num_base_bdevs_operational": 2, 00:13:01.970 "process": { 00:13:01.970 "type": "rebuild", 00:13:01.970 "target": "spare", 00:13:01.970 "progress": { 00:13:01.970 "blocks": 20480, 00:13:01.970 "percent": 32 00:13:01.970 } 00:13:01.970 }, 00:13:01.970 "base_bdevs_list": [ 00:13:01.970 { 00:13:01.970 "name": "spare", 00:13:01.970 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:13:01.970 "is_configured": true, 00:13:01.970 "data_offset": 2048, 00:13:01.970 "data_size": 63488 00:13:01.970 }, 00:13:01.970 { 00:13:01.970 "name": "BaseBdev2", 00:13:01.970 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:01.970 "is_configured": true, 00:13:01.970 "data_offset": 2048, 00:13:01.970 "data_size": 63488 00:13:01.970 } 00:13:01.970 ] 00:13:01.970 }' 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.970 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.970 
02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.230 [2024-10-13 02:27:20.664002] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.230 [2024-10-13 02:27:20.726970] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:02.230 [2024-10-13 02:27:20.727069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.230 [2024-10-13 02:27:20.727088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.230 [2024-10-13 02:27:20.727095] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.230 "name": "raid_bdev1", 00:13:02.230 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:02.230 "strip_size_kb": 0, 00:13:02.230 "state": "online", 00:13:02.230 "raid_level": "raid1", 00:13:02.230 "superblock": true, 00:13:02.230 "num_base_bdevs": 2, 00:13:02.230 "num_base_bdevs_discovered": 1, 00:13:02.230 "num_base_bdevs_operational": 1, 00:13:02.230 "base_bdevs_list": [ 00:13:02.230 { 00:13:02.230 "name": null, 00:13:02.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.230 "is_configured": false, 00:13:02.230 "data_offset": 0, 00:13:02.230 "data_size": 63488 00:13:02.230 }, 00:13:02.230 { 00:13:02.230 "name": "BaseBdev2", 00:13:02.230 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:02.230 "is_configured": true, 00:13:02.230 "data_offset": 2048, 00:13:02.230 "data_size": 63488 00:13:02.230 } 00:13:02.230 ] 00:13:02.230 }' 00:13:02.230 02:27:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.230 02:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.490 02:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:02.490 02:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.490 02:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.490 [2024-10-13 02:27:21.166752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:02.490 [2024-10-13 02:27:21.166905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.490 [2024-10-13 02:27:21.166949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:02.490 [2024-10-13 02:27:21.166977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.490 [2024-10-13 02:27:21.167420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.490 [2024-10-13 02:27:21.167482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:02.490 [2024-10-13 02:27:21.167600] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:02.490 [2024-10-13 02:27:21.167648] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:02.490 [2024-10-13 02:27:21.167686] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:02.490 [2024-10-13 02:27:21.167752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.490 [2024-10-13 02:27:21.172024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:13:02.490 spare 00:13:02.750 02:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.750 02:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:02.750 [2024-10-13 02:27:21.173806] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.705 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.705 "name": "raid_bdev1", 00:13:03.705 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:03.705 "strip_size_kb": 0, 00:13:03.705 
"state": "online", 00:13:03.705 "raid_level": "raid1", 00:13:03.705 "superblock": true, 00:13:03.705 "num_base_bdevs": 2, 00:13:03.705 "num_base_bdevs_discovered": 2, 00:13:03.705 "num_base_bdevs_operational": 2, 00:13:03.705 "process": { 00:13:03.705 "type": "rebuild", 00:13:03.705 "target": "spare", 00:13:03.705 "progress": { 00:13:03.705 "blocks": 20480, 00:13:03.705 "percent": 32 00:13:03.705 } 00:13:03.705 }, 00:13:03.705 "base_bdevs_list": [ 00:13:03.705 { 00:13:03.705 "name": "spare", 00:13:03.705 "uuid": "7806143b-ac57-5530-8c8a-96002c7f8385", 00:13:03.705 "is_configured": true, 00:13:03.705 "data_offset": 2048, 00:13:03.705 "data_size": 63488 00:13:03.705 }, 00:13:03.705 { 00:13:03.705 "name": "BaseBdev2", 00:13:03.705 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:03.705 "is_configured": true, 00:13:03.705 "data_offset": 2048, 00:13:03.705 "data_size": 63488 00:13:03.705 } 00:13:03.705 ] 00:13:03.706 }' 00:13:03.706 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.706 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.706 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.706 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.706 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:03.706 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.706 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.706 [2024-10-13 02:27:22.330320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.706 [2024-10-13 02:27:22.377828] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:03.706 [2024-10-13 02:27:22.377943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.706 [2024-10-13 02:27:22.377973] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.706 [2024-10-13 02:27:22.378000] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.965 02:27:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.965 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.965 "name": "raid_bdev1", 00:13:03.965 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:03.965 "strip_size_kb": 0, 00:13:03.965 "state": "online", 00:13:03.965 "raid_level": "raid1", 00:13:03.965 "superblock": true, 00:13:03.965 "num_base_bdevs": 2, 00:13:03.965 "num_base_bdevs_discovered": 1, 00:13:03.965 "num_base_bdevs_operational": 1, 00:13:03.966 "base_bdevs_list": [ 00:13:03.966 { 00:13:03.966 "name": null, 00:13:03.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.966 "is_configured": false, 00:13:03.966 "data_offset": 0, 00:13:03.966 "data_size": 63488 00:13:03.966 }, 00:13:03.966 { 00:13:03.966 "name": "BaseBdev2", 00:13:03.966 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:03.966 "is_configured": true, 00:13:03.966 "data_offset": 2048, 00:13:03.966 "data_size": 63488 00:13:03.966 } 00:13:03.966 ] 00:13:03.966 }' 00:13:03.966 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.966 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.226 "name": "raid_bdev1", 00:13:04.226 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:04.226 "strip_size_kb": 0, 00:13:04.226 "state": "online", 00:13:04.226 "raid_level": "raid1", 00:13:04.226 "superblock": true, 00:13:04.226 "num_base_bdevs": 2, 00:13:04.226 "num_base_bdevs_discovered": 1, 00:13:04.226 "num_base_bdevs_operational": 1, 00:13:04.226 "base_bdevs_list": [ 00:13:04.226 { 00:13:04.226 "name": null, 00:13:04.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.226 "is_configured": false, 00:13:04.226 "data_offset": 0, 00:13:04.226 "data_size": 63488 00:13:04.226 }, 00:13:04.226 { 00:13:04.226 "name": "BaseBdev2", 00:13:04.226 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:04.226 "is_configured": true, 00:13:04.226 "data_offset": 2048, 00:13:04.226 "data_size": 63488 00:13:04.226 } 00:13:04.226 ] 00:13:04.226 }' 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.226 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.486 [2024-10-13 02:27:22.949355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:04.486 [2024-10-13 02:27:22.949466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.486 [2024-10-13 02:27:22.949502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:04.486 [2024-10-13 02:27:22.949534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.486 [2024-10-13 02:27:22.949986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.486 [2024-10-13 02:27:22.950047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.486 [2024-10-13 02:27:22.950144] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:04.486 [2024-10-13 02:27:22.950201] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:04.486 [2024-10-13 02:27:22.950240] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:04.486 [2024-10-13 02:27:22.950302] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:04.486 BaseBdev1 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.486 02:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.434 02:27:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.434 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.434 "name": "raid_bdev1", 00:13:05.434 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:05.434 "strip_size_kb": 0, 00:13:05.434 "state": "online", 00:13:05.434 "raid_level": "raid1", 00:13:05.434 "superblock": true, 00:13:05.434 "num_base_bdevs": 2, 00:13:05.434 "num_base_bdevs_discovered": 1, 00:13:05.434 "num_base_bdevs_operational": 1, 00:13:05.434 "base_bdevs_list": [ 00:13:05.434 { 00:13:05.434 "name": null, 00:13:05.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.434 "is_configured": false, 00:13:05.434 "data_offset": 0, 00:13:05.434 "data_size": 63488 00:13:05.434 }, 00:13:05.434 { 00:13:05.434 "name": "BaseBdev2", 00:13:05.434 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:05.434 "is_configured": true, 00:13:05.434 "data_offset": 2048, 00:13:05.434 "data_size": 63488 00:13:05.434 } 00:13:05.434 ] 00:13:05.434 }' 00:13:05.434 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.434 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.019 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.019 "name": "raid_bdev1", 00:13:06.019 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:06.019 "strip_size_kb": 0, 00:13:06.019 "state": "online", 00:13:06.019 "raid_level": "raid1", 00:13:06.019 "superblock": true, 00:13:06.019 "num_base_bdevs": 2, 00:13:06.019 "num_base_bdevs_discovered": 1, 00:13:06.019 "num_base_bdevs_operational": 1, 00:13:06.019 "base_bdevs_list": [ 00:13:06.019 { 00:13:06.019 "name": null, 00:13:06.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.019 "is_configured": false, 00:13:06.019 "data_offset": 0, 00:13:06.019 "data_size": 63488 00:13:06.019 }, 00:13:06.019 { 00:13:06.019 "name": "BaseBdev2", 00:13:06.020 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:06.020 "is_configured": true, 00:13:06.020 "data_offset": 2048, 00:13:06.020 "data_size": 63488 00:13:06.020 } 00:13:06.020 ] 00:13:06.020 }' 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.020 [2024-10-13 02:27:24.559087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.020 [2024-10-13 02:27:24.559300] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:06.020 [2024-10-13 02:27:24.559352] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:06.020 request: 00:13:06.020 { 00:13:06.020 "base_bdev": "BaseBdev1", 00:13:06.020 "raid_bdev": "raid_bdev1", 00:13:06.020 "method": "bdev_raid_add_base_bdev", 00:13:06.020 "req_id": 1 00:13:06.020 } 00:13:06.020 Got JSON-RPC error response 00:13:06.020 response: 00:13:06.020 { 00:13:06.020 "code": -22, 00:13:06.020 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:06.020 } 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:06.020 02:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.961 "name": "raid_bdev1", 00:13:06.961 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:06.961 "strip_size_kb": 0, 00:13:06.961 "state": "online", 00:13:06.961 "raid_level": "raid1", 00:13:06.961 "superblock": true, 00:13:06.961 "num_base_bdevs": 2, 00:13:06.961 "num_base_bdevs_discovered": 1, 00:13:06.961 "num_base_bdevs_operational": 1, 00:13:06.961 "base_bdevs_list": [ 00:13:06.961 { 00:13:06.961 "name": null, 00:13:06.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.961 "is_configured": false, 00:13:06.961 "data_offset": 0, 00:13:06.961 "data_size": 63488 00:13:06.961 }, 00:13:06.961 { 00:13:06.961 "name": "BaseBdev2", 00:13:06.961 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:06.961 "is_configured": true, 00:13:06.961 "data_offset": 2048, 00:13:06.961 "data_size": 63488 00:13:06.961 } 00:13:06.961 ] 00:13:06.961 }' 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.961 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.533 02:27:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.533 "name": "raid_bdev1", 00:13:07.533 "uuid": "3e8f09d8-cefa-428b-9c5b-c52978efbe34", 00:13:07.533 "strip_size_kb": 0, 00:13:07.533 "state": "online", 00:13:07.533 "raid_level": "raid1", 00:13:07.533 "superblock": true, 00:13:07.533 "num_base_bdevs": 2, 00:13:07.533 "num_base_bdevs_discovered": 1, 00:13:07.533 "num_base_bdevs_operational": 1, 00:13:07.533 "base_bdevs_list": [ 00:13:07.533 { 00:13:07.533 "name": null, 00:13:07.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.533 "is_configured": false, 00:13:07.533 "data_offset": 0, 00:13:07.533 "data_size": 63488 00:13:07.533 }, 00:13:07.533 { 00:13:07.533 "name": "BaseBdev2", 00:13:07.533 "uuid": "284e94d9-a906-577f-96ca-92a29669c13a", 00:13:07.533 "is_configured": true, 00:13:07.533 "data_offset": 2048, 00:13:07.533 "data_size": 63488 00:13:07.533 } 00:13:07.533 ] 00:13:07.533 }' 00:13:07.533 02:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.533 02:27:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87457 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87457 ']' 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87457 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87457 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87457' 00:13:07.533 killing process with pid 87457 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87457 00:13:07.533 Received shutdown signal, test time was about 16.592981 seconds 00:13:07.533 00:13:07.533 Latency(us) 00:13:07.533 [2024-10-13T02:27:26.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.533 [2024-10-13T02:27:26.217Z] =================================================================================================================== 00:13:07.533 [2024-10-13T02:27:26.217Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.533 [2024-10-13 02:27:26.102681] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.533 [2024-10-13 02:27:26.102809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.533 [2024-10-13 02:27:26.102883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:13:07.533 [2024-10-13 02:27:26.102893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:07.533 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87457 00:13:07.533 [2024-10-13 02:27:26.128628] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:07.794 00:13:07.794 real 0m18.613s 00:13:07.794 user 0m24.666s 00:13:07.794 sys 0m2.293s 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.794 ************************************ 00:13:07.794 END TEST raid_rebuild_test_sb_io 00:13:07.794 ************************************ 00:13:07.794 02:27:26 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:07.794 02:27:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:07.794 02:27:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:07.794 02:27:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.794 02:27:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:07.794 ************************************ 00:13:07.794 START TEST raid_rebuild_test 00:13:07.794 ************************************ 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:07.794 02:27:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88130 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88130 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88130 ']' 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.794 02:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:08.055 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.055 Zero copy mechanism will not be used. 
00:13:08.055 [2024-10-13 02:27:26.518956] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:08.055 [2024-10-13 02:27:26.519058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88130 ] 00:13:08.055 [2024-10-13 02:27:26.662703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.055 [2024-10-13 02:27:26.706904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.315 [2024-10-13 02:27:26.749140] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.315 [2024-10-13 02:27:26.749185] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.886 BaseBdev1_malloc 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.886 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.886 
[2024-10-13 02:27:27.407771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:08.886 [2024-10-13 02:27:27.407830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.886 [2024-10-13 02:27:27.407860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:08.886 [2024-10-13 02:27:27.407885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.886 [2024-10-13 02:27:27.410005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.886 [2024-10-13 02:27:27.410043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.887 BaseBdev1 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 BaseBdev2_malloc 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 [2024-10-13 02:27:27.446330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:08.887 [2024-10-13 02:27:27.446440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:08.887 [2024-10-13 02:27:27.446470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:08.887 [2024-10-13 02:27:27.446483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.887 [2024-10-13 02:27:27.448865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.887 [2024-10-13 02:27:27.448910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:08.887 BaseBdev2 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 BaseBdev3_malloc 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 [2024-10-13 02:27:27.474900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:08.887 [2024-10-13 02:27:27.474950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.887 [2024-10-13 02:27:27.474977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:08.887 [2024-10-13 02:27:27.474987] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.887 [2024-10-13 02:27:27.477040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.887 [2024-10-13 02:27:27.477070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:08.887 BaseBdev3 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 BaseBdev4_malloc 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 [2024-10-13 02:27:27.503312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:08.887 [2024-10-13 02:27:27.503352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.887 [2024-10-13 02:27:27.503373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:08.887 [2024-10-13 02:27:27.503381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.887 [2024-10-13 02:27:27.505365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.887 [2024-10-13 02:27:27.505395] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:08.887 BaseBdev4 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 spare_malloc 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 spare_delay 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 [2024-10-13 02:27:27.539837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:08.887 [2024-10-13 02:27:27.539886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.887 [2024-10-13 02:27:27.539906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:08.887 [2024-10-13 02:27:27.539914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.887 [2024-10-13 
02:27:27.541942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.887 [2024-10-13 02:27:27.541972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:08.887 spare 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.887 [2024-10-13 02:27:27.547900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:08.887 [2024-10-13 02:27:27.549568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.887 [2024-10-13 02:27:27.549628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.887 [2024-10-13 02:27:27.549672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:08.887 [2024-10-13 02:27:27.549739] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:08.887 [2024-10-13 02:27:27.549749] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:08.887 [2024-10-13 02:27:27.549997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:08.887 [2024-10-13 02:27:27.550127] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:08.887 [2024-10-13 02:27:27.550145] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:08.887 [2024-10-13 02:27:27.550266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.887 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.148 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.148 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.148 "name": "raid_bdev1", 00:13:09.148 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:09.148 "strip_size_kb": 0, 00:13:09.148 "state": "online", 00:13:09.148 "raid_level": 
"raid1", 00:13:09.148 "superblock": false, 00:13:09.148 "num_base_bdevs": 4, 00:13:09.148 "num_base_bdevs_discovered": 4, 00:13:09.148 "num_base_bdevs_operational": 4, 00:13:09.148 "base_bdevs_list": [ 00:13:09.148 { 00:13:09.148 "name": "BaseBdev1", 00:13:09.148 "uuid": "24d470d0-e16b-55a2-a5f9-6fbc13db4d5b", 00:13:09.148 "is_configured": true, 00:13:09.148 "data_offset": 0, 00:13:09.148 "data_size": 65536 00:13:09.148 }, 00:13:09.148 { 00:13:09.148 "name": "BaseBdev2", 00:13:09.148 "uuid": "c2cd233f-957c-52ad-9898-85e7558ae960", 00:13:09.148 "is_configured": true, 00:13:09.148 "data_offset": 0, 00:13:09.148 "data_size": 65536 00:13:09.148 }, 00:13:09.148 { 00:13:09.148 "name": "BaseBdev3", 00:13:09.148 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:09.148 "is_configured": true, 00:13:09.148 "data_offset": 0, 00:13:09.148 "data_size": 65536 00:13:09.148 }, 00:13:09.148 { 00:13:09.148 "name": "BaseBdev4", 00:13:09.148 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:09.148 "is_configured": true, 00:13:09.148 "data_offset": 0, 00:13:09.148 "data_size": 65536 00:13:09.148 } 00:13:09.148 ] 00:13:09.148 }' 00:13:09.148 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.148 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.409 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:09.409 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.409 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.409 02:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:09.409 [2024-10-13 02:27:27.971434] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.409 02:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.409 02:27:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.409 02:27:28 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:09.668 [2024-10-13 02:27:28.242769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:13:09.668 /dev/nbd0 00:13:09.668 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.668 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.668 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:09.668 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.669 1+0 records in 00:13:09.669 1+0 records out 00:13:09.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255749 s, 16.0 MB/s 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:09.669 02:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:16.247 65536+0 records in 00:13:16.247 65536+0 records out 00:13:16.247 33554432 bytes (34 MB, 32 MiB) copied, 5.47202 s, 6.1 MB/s 00:13:16.247 02:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:16.247 02:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.247 02:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:16.247 02:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.247 02:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:16.247 02:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.247 02:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:16.247 [2024-10-13 02:27:33.979947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.247 02:27:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.247 
02:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.247 [2024-10-13 02:27:34.019943] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.247 02:27:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.247 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.247 "name": "raid_bdev1", 00:13:16.247 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:16.247 "strip_size_kb": 0, 00:13:16.247 "state": "online", 00:13:16.247 "raid_level": "raid1", 00:13:16.247 "superblock": false, 00:13:16.247 "num_base_bdevs": 4, 00:13:16.247 "num_base_bdevs_discovered": 3, 00:13:16.247 "num_base_bdevs_operational": 3, 00:13:16.247 "base_bdevs_list": [ 00:13:16.247 { 00:13:16.247 "name": null, 00:13:16.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.247 "is_configured": false, 00:13:16.247 "data_offset": 0, 00:13:16.247 "data_size": 65536 00:13:16.247 }, 00:13:16.247 { 00:13:16.247 "name": "BaseBdev2", 00:13:16.247 "uuid": "c2cd233f-957c-52ad-9898-85e7558ae960", 00:13:16.247 "is_configured": true, 00:13:16.247 "data_offset": 0, 00:13:16.247 "data_size": 65536 00:13:16.247 }, 00:13:16.247 { 00:13:16.247 "name": "BaseBdev3", 00:13:16.247 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:16.247 "is_configured": true, 00:13:16.247 "data_offset": 0, 00:13:16.247 "data_size": 65536 00:13:16.247 }, 00:13:16.247 { 00:13:16.247 "name": "BaseBdev4", 00:13:16.247 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:16.247 
"is_configured": true, 00:13:16.247 "data_offset": 0, 00:13:16.247 "data_size": 65536 00:13:16.247 } 00:13:16.247 ] 00:13:16.247 }' 00:13:16.248 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.248 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.248 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.248 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.248 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.248 [2024-10-13 02:27:34.463318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.248 [2024-10-13 02:27:34.466736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:13:16.248 [2024-10-13 02:27:34.468656] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:16.248 02:27:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.248 02:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:16.818 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.818 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.818 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.818 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.818 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.818 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.818 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.818 
02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.818 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.078 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.078 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.078 "name": "raid_bdev1", 00:13:17.078 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:17.078 "strip_size_kb": 0, 00:13:17.078 "state": "online", 00:13:17.078 "raid_level": "raid1", 00:13:17.078 "superblock": false, 00:13:17.078 "num_base_bdevs": 4, 00:13:17.078 "num_base_bdevs_discovered": 4, 00:13:17.078 "num_base_bdevs_operational": 4, 00:13:17.078 "process": { 00:13:17.078 "type": "rebuild", 00:13:17.078 "target": "spare", 00:13:17.078 "progress": { 00:13:17.078 "blocks": 20480, 00:13:17.078 "percent": 31 00:13:17.078 } 00:13:17.078 }, 00:13:17.078 "base_bdevs_list": [ 00:13:17.078 { 00:13:17.078 "name": "spare", 00:13:17.078 "uuid": "edf81460-4443-5752-8ab8-693e94d165fb", 00:13:17.078 "is_configured": true, 00:13:17.078 "data_offset": 0, 00:13:17.078 "data_size": 65536 00:13:17.078 }, 00:13:17.078 { 00:13:17.078 "name": "BaseBdev2", 00:13:17.078 "uuid": "c2cd233f-957c-52ad-9898-85e7558ae960", 00:13:17.078 "is_configured": true, 00:13:17.078 "data_offset": 0, 00:13:17.078 "data_size": 65536 00:13:17.078 }, 00:13:17.078 { 00:13:17.078 "name": "BaseBdev3", 00:13:17.078 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:17.078 "is_configured": true, 00:13:17.078 "data_offset": 0, 00:13:17.078 "data_size": 65536 00:13:17.078 }, 00:13:17.078 { 00:13:17.079 "name": "BaseBdev4", 00:13:17.079 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:17.079 "is_configured": true, 00:13:17.079 "data_offset": 0, 00:13:17.079 "data_size": 65536 00:13:17.079 } 00:13:17.079 ] 00:13:17.079 }' 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.079 [2024-10-13 02:27:35.628007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.079 [2024-10-13 02:27:35.673468] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:17.079 [2024-10-13 02:27:35.673535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.079 [2024-10-13 02:27:35.673552] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.079 [2024-10-13 02:27:35.673560] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.079 02:27:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.079 "name": "raid_bdev1", 00:13:17.079 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:17.079 "strip_size_kb": 0, 00:13:17.079 "state": "online", 00:13:17.079 "raid_level": "raid1", 00:13:17.079 "superblock": false, 00:13:17.079 "num_base_bdevs": 4, 00:13:17.079 "num_base_bdevs_discovered": 3, 00:13:17.079 "num_base_bdevs_operational": 3, 00:13:17.079 "base_bdevs_list": [ 00:13:17.079 { 00:13:17.079 "name": null, 00:13:17.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.079 "is_configured": false, 00:13:17.079 "data_offset": 0, 00:13:17.079 "data_size": 65536 00:13:17.079 }, 00:13:17.079 { 00:13:17.079 "name": "BaseBdev2", 00:13:17.079 "uuid": "c2cd233f-957c-52ad-9898-85e7558ae960", 00:13:17.079 "is_configured": true, 00:13:17.079 "data_offset": 0, 00:13:17.079 "data_size": 65536 00:13:17.079 }, 00:13:17.079 { 00:13:17.079 "name": 
"BaseBdev3", 00:13:17.079 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:17.079 "is_configured": true, 00:13:17.079 "data_offset": 0, 00:13:17.079 "data_size": 65536 00:13:17.079 }, 00:13:17.079 { 00:13:17.079 "name": "BaseBdev4", 00:13:17.079 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:17.079 "is_configured": true, 00:13:17.079 "data_offset": 0, 00:13:17.079 "data_size": 65536 00:13:17.079 } 00:13:17.079 ] 00:13:17.079 }' 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.079 02:27:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.650 "name": "raid_bdev1", 00:13:17.650 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:17.650 "strip_size_kb": 0, 00:13:17.650 "state": "online", 00:13:17.650 "raid_level": 
"raid1", 00:13:17.650 "superblock": false, 00:13:17.650 "num_base_bdevs": 4, 00:13:17.650 "num_base_bdevs_discovered": 3, 00:13:17.650 "num_base_bdevs_operational": 3, 00:13:17.650 "base_bdevs_list": [ 00:13:17.650 { 00:13:17.650 "name": null, 00:13:17.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.650 "is_configured": false, 00:13:17.650 "data_offset": 0, 00:13:17.650 "data_size": 65536 00:13:17.650 }, 00:13:17.650 { 00:13:17.650 "name": "BaseBdev2", 00:13:17.650 "uuid": "c2cd233f-957c-52ad-9898-85e7558ae960", 00:13:17.650 "is_configured": true, 00:13:17.650 "data_offset": 0, 00:13:17.650 "data_size": 65536 00:13:17.650 }, 00:13:17.650 { 00:13:17.650 "name": "BaseBdev3", 00:13:17.650 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:17.650 "is_configured": true, 00:13:17.650 "data_offset": 0, 00:13:17.650 "data_size": 65536 00:13:17.650 }, 00:13:17.650 { 00:13:17.650 "name": "BaseBdev4", 00:13:17.650 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:17.650 "is_configured": true, 00:13:17.650 "data_offset": 0, 00:13:17.650 "data_size": 65536 00:13:17.650 } 00:13:17.650 ] 00:13:17.650 }' 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.650 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.651 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:17.651 02:27:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.651 02:27:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.651 [2024-10-13 02:27:36.280416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:17.651 [2024-10-13 02:27:36.283781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:13:17.651 [2024-10-13 02:27:36.285689] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.651 02:27:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.651 02:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.035 "name": "raid_bdev1", 00:13:19.035 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:19.035 "strip_size_kb": 0, 00:13:19.035 "state": "online", 00:13:19.035 "raid_level": "raid1", 00:13:19.035 "superblock": false, 00:13:19.035 "num_base_bdevs": 4, 00:13:19.035 "num_base_bdevs_discovered": 4, 00:13:19.035 "num_base_bdevs_operational": 4, 
00:13:19.035 "process": { 00:13:19.035 "type": "rebuild", 00:13:19.035 "target": "spare", 00:13:19.035 "progress": { 00:13:19.035 "blocks": 20480, 00:13:19.035 "percent": 31 00:13:19.035 } 00:13:19.035 }, 00:13:19.035 "base_bdevs_list": [ 00:13:19.035 { 00:13:19.035 "name": "spare", 00:13:19.035 "uuid": "edf81460-4443-5752-8ab8-693e94d165fb", 00:13:19.035 "is_configured": true, 00:13:19.035 "data_offset": 0, 00:13:19.035 "data_size": 65536 00:13:19.035 }, 00:13:19.035 { 00:13:19.035 "name": "BaseBdev2", 00:13:19.035 "uuid": "c2cd233f-957c-52ad-9898-85e7558ae960", 00:13:19.035 "is_configured": true, 00:13:19.035 "data_offset": 0, 00:13:19.035 "data_size": 65536 00:13:19.035 }, 00:13:19.035 { 00:13:19.035 "name": "BaseBdev3", 00:13:19.035 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:19.035 "is_configured": true, 00:13:19.035 "data_offset": 0, 00:13:19.035 "data_size": 65536 00:13:19.035 }, 00:13:19.035 { 00:13:19.035 "name": "BaseBdev4", 00:13:19.035 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:19.035 "is_configured": true, 00:13:19.035 "data_offset": 0, 00:13:19.035 "data_size": 65536 00:13:19.035 } 00:13:19.035 ] 00:13:19.035 }' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.035 [2024-10-13 02:27:37.428261] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.035 [2024-10-13 02:27:37.490040] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.035 "name": "raid_bdev1", 00:13:19.035 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:19.035 "strip_size_kb": 0, 00:13:19.035 "state": "online", 00:13:19.035 "raid_level": "raid1", 00:13:19.035 "superblock": false, 00:13:19.035 "num_base_bdevs": 4, 00:13:19.035 "num_base_bdevs_discovered": 3, 00:13:19.035 "num_base_bdevs_operational": 3, 00:13:19.035 "process": { 00:13:19.035 "type": "rebuild", 00:13:19.035 "target": "spare", 00:13:19.035 "progress": { 00:13:19.035 "blocks": 24576, 00:13:19.035 "percent": 37 00:13:19.035 } 00:13:19.035 }, 00:13:19.035 "base_bdevs_list": [ 00:13:19.035 { 00:13:19.035 "name": "spare", 00:13:19.035 "uuid": "edf81460-4443-5752-8ab8-693e94d165fb", 00:13:19.035 "is_configured": true, 00:13:19.035 "data_offset": 0, 00:13:19.035 "data_size": 65536 00:13:19.035 }, 00:13:19.035 { 00:13:19.035 "name": null, 00:13:19.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.035 "is_configured": false, 00:13:19.035 "data_offset": 0, 00:13:19.035 "data_size": 65536 00:13:19.035 }, 00:13:19.035 { 00:13:19.035 "name": "BaseBdev3", 00:13:19.035 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:19.035 "is_configured": true, 00:13:19.035 "data_offset": 0, 00:13:19.035 "data_size": 65536 00:13:19.035 }, 00:13:19.035 { 00:13:19.035 "name": "BaseBdev4", 00:13:19.035 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:19.035 "is_configured": true, 00:13:19.035 "data_offset": 0, 00:13:19.035 "data_size": 65536 00:13:19.035 } 00:13:19.035 ] 00:13:19.035 }' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.035 02:27:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=365 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.035 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.036 "name": "raid_bdev1", 00:13:19.036 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:19.036 "strip_size_kb": 0, 00:13:19.036 "state": "online", 00:13:19.036 "raid_level": "raid1", 00:13:19.036 "superblock": false, 00:13:19.036 "num_base_bdevs": 4, 00:13:19.036 "num_base_bdevs_discovered": 3, 00:13:19.036 "num_base_bdevs_operational": 3, 00:13:19.036 "process": { 00:13:19.036 "type": "rebuild", 00:13:19.036 "target": "spare", 00:13:19.036 "progress": { 00:13:19.036 "blocks": 26624, 00:13:19.036 "percent": 40 
00:13:19.036 } 00:13:19.036 }, 00:13:19.036 "base_bdevs_list": [ 00:13:19.036 { 00:13:19.036 "name": "spare", 00:13:19.036 "uuid": "edf81460-4443-5752-8ab8-693e94d165fb", 00:13:19.036 "is_configured": true, 00:13:19.036 "data_offset": 0, 00:13:19.036 "data_size": 65536 00:13:19.036 }, 00:13:19.036 { 00:13:19.036 "name": null, 00:13:19.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.036 "is_configured": false, 00:13:19.036 "data_offset": 0, 00:13:19.036 "data_size": 65536 00:13:19.036 }, 00:13:19.036 { 00:13:19.036 "name": "BaseBdev3", 00:13:19.036 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:19.036 "is_configured": true, 00:13:19.036 "data_offset": 0, 00:13:19.036 "data_size": 65536 00:13:19.036 }, 00:13:19.036 { 00:13:19.036 "name": "BaseBdev4", 00:13:19.036 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:19.036 "is_configured": true, 00:13:19.036 "data_offset": 0, 00:13:19.036 "data_size": 65536 00:13:19.036 } 00:13:19.036 ] 00:13:19.036 }' 00:13:19.036 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.296 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.296 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.296 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.296 02:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.236 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.236 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.236 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.237 02:27:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.237 "name": "raid_bdev1", 00:13:20.237 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:20.237 "strip_size_kb": 0, 00:13:20.237 "state": "online", 00:13:20.237 "raid_level": "raid1", 00:13:20.237 "superblock": false, 00:13:20.237 "num_base_bdevs": 4, 00:13:20.237 "num_base_bdevs_discovered": 3, 00:13:20.237 "num_base_bdevs_operational": 3, 00:13:20.237 "process": { 00:13:20.237 "type": "rebuild", 00:13:20.237 "target": "spare", 00:13:20.237 "progress": { 00:13:20.237 "blocks": 51200, 00:13:20.237 "percent": 78 00:13:20.237 } 00:13:20.237 }, 00:13:20.237 "base_bdevs_list": [ 00:13:20.237 { 00:13:20.237 "name": "spare", 00:13:20.237 "uuid": "edf81460-4443-5752-8ab8-693e94d165fb", 00:13:20.237 "is_configured": true, 00:13:20.237 "data_offset": 0, 00:13:20.237 "data_size": 65536 00:13:20.237 }, 00:13:20.237 { 00:13:20.237 "name": null, 00:13:20.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.237 "is_configured": false, 00:13:20.237 "data_offset": 0, 00:13:20.237 "data_size": 65536 00:13:20.237 }, 00:13:20.237 { 00:13:20.237 "name": "BaseBdev3", 00:13:20.237 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:20.237 "is_configured": true, 
00:13:20.237 "data_offset": 0, 00:13:20.237 "data_size": 65536 00:13:20.237 }, 00:13:20.237 { 00:13:20.237 "name": "BaseBdev4", 00:13:20.237 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:20.237 "is_configured": true, 00:13:20.237 "data_offset": 0, 00:13:20.237 "data_size": 65536 00:13:20.237 } 00:13:20.237 ] 00:13:20.237 }' 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.237 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.526 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.526 02:27:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.115 [2024-10-13 02:27:39.497779] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.115 [2024-10-13 02:27:39.497860] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.115 [2024-10-13 02:27:39.497904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.374 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.375 "name": "raid_bdev1", 00:13:21.375 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:21.375 "strip_size_kb": 0, 00:13:21.375 "state": "online", 00:13:21.375 "raid_level": "raid1", 00:13:21.375 "superblock": false, 00:13:21.375 "num_base_bdevs": 4, 00:13:21.375 "num_base_bdevs_discovered": 3, 00:13:21.375 "num_base_bdevs_operational": 3, 00:13:21.375 "base_bdevs_list": [ 00:13:21.375 { 00:13:21.375 "name": "spare", 00:13:21.375 "uuid": "edf81460-4443-5752-8ab8-693e94d165fb", 00:13:21.375 "is_configured": true, 00:13:21.375 "data_offset": 0, 00:13:21.375 "data_size": 65536 00:13:21.375 }, 00:13:21.375 { 00:13:21.375 "name": null, 00:13:21.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.375 "is_configured": false, 00:13:21.375 "data_offset": 0, 00:13:21.375 "data_size": 65536 00:13:21.375 }, 00:13:21.375 { 00:13:21.375 "name": "BaseBdev3", 00:13:21.375 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:21.375 "is_configured": true, 00:13:21.375 "data_offset": 0, 00:13:21.375 "data_size": 65536 00:13:21.375 }, 00:13:21.375 { 00:13:21.375 "name": "BaseBdev4", 00:13:21.375 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:21.375 "is_configured": true, 00:13:21.375 "data_offset": 0, 00:13:21.375 "data_size": 65536 00:13:21.375 } 00:13:21.375 ] 00:13:21.375 }' 00:13:21.375 02:27:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.375 02:27:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:21.375 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.635 "name": "raid_bdev1", 00:13:21.635 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:21.635 "strip_size_kb": 0, 00:13:21.635 "state": "online", 00:13:21.635 "raid_level": "raid1", 00:13:21.635 "superblock": false, 00:13:21.635 "num_base_bdevs": 4, 00:13:21.635 "num_base_bdevs_discovered": 3, 00:13:21.635 "num_base_bdevs_operational": 3, 00:13:21.635 "base_bdevs_list": [ 00:13:21.635 { 00:13:21.635 "name": "spare", 
00:13:21.635 "uuid": "edf81460-4443-5752-8ab8-693e94d165fb", 00:13:21.635 "is_configured": true, 00:13:21.635 "data_offset": 0, 00:13:21.635 "data_size": 65536 00:13:21.635 }, 00:13:21.635 { 00:13:21.635 "name": null, 00:13:21.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.635 "is_configured": false, 00:13:21.635 "data_offset": 0, 00:13:21.635 "data_size": 65536 00:13:21.635 }, 00:13:21.635 { 00:13:21.635 "name": "BaseBdev3", 00:13:21.635 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:21.635 "is_configured": true, 00:13:21.635 "data_offset": 0, 00:13:21.635 "data_size": 65536 00:13:21.635 }, 00:13:21.635 { 00:13:21.635 "name": "BaseBdev4", 00:13:21.635 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:21.635 "is_configured": true, 00:13:21.635 "data_offset": 0, 00:13:21.635 "data_size": 65536 00:13:21.635 } 00:13:21.635 ] 00:13:21.635 }' 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.635 02:27:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.635 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.635 "name": "raid_bdev1", 00:13:21.635 "uuid": "3022f90b-9e31-41d6-8f5e-d0ef815e0219", 00:13:21.635 "strip_size_kb": 0, 00:13:21.635 "state": "online", 00:13:21.635 "raid_level": "raid1", 00:13:21.636 "superblock": false, 00:13:21.636 "num_base_bdevs": 4, 00:13:21.636 "num_base_bdevs_discovered": 3, 00:13:21.636 "num_base_bdevs_operational": 3, 00:13:21.636 "base_bdevs_list": [ 00:13:21.636 { 00:13:21.636 "name": "spare", 00:13:21.636 "uuid": "edf81460-4443-5752-8ab8-693e94d165fb", 00:13:21.636 "is_configured": true, 00:13:21.636 "data_offset": 0, 00:13:21.636 "data_size": 65536 00:13:21.636 }, 00:13:21.636 { 00:13:21.636 "name": null, 00:13:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.636 "is_configured": false, 00:13:21.636 "data_offset": 0, 00:13:21.636 "data_size": 65536 00:13:21.636 }, 00:13:21.636 { 00:13:21.636 "name": "BaseBdev3", 00:13:21.636 "uuid": "28fc6fb0-82ce-51cb-8371-410c40a9685a", 00:13:21.636 "is_configured": true, 
00:13:21.636 "data_offset": 0, 00:13:21.636 "data_size": 65536 00:13:21.636 }, 00:13:21.636 { 00:13:21.636 "name": "BaseBdev4", 00:13:21.636 "uuid": "515b1ee5-c019-5d7b-980c-6e05b5fe9853", 00:13:21.636 "is_configured": true, 00:13:21.636 "data_offset": 0, 00:13:21.636 "data_size": 65536 00:13:21.636 } 00:13:21.636 ] 00:13:21.636 }' 00:13:21.636 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.636 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.205 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.205 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.205 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.205 [2024-10-13 02:27:40.683703] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.205 [2024-10-13 02:27:40.683736] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.205 [2024-10-13 02:27:40.683835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.205 [2024-10-13 02:27:40.683938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.205 [2024-10-13 02:27:40.683956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:22.205 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.206 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:22.466 /dev/nbd0 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:22.466 02:27:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.466 1+0 records in 00:13:22.466 1+0 records out 00:13:22.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319075 s, 12.8 MB/s 00:13:22.466 02:27:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.466 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:22.466 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.466 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.466 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:22.466 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.466 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.466 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:22.726 /dev/nbd1 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:22.726 
02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.726 1+0 records in 00:13:22.726 1+0 records out 00:13:22.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390247 s, 10.5 MB/s 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.726 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.986 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:23.246 
02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88130 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88130 ']' 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88130 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88130 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:23.246 killing process with pid 88130 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88130' 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88130 00:13:23.246 
Received shutdown signal, test time was about 60.000000 seconds 00:13:23.246 00:13:23.246 Latency(us) 00:13:23.246 [2024-10-13T02:27:41.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.246 [2024-10-13T02:27:41.930Z] =================================================================================================================== 00:13:23.246 [2024-10-13T02:27:41.930Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.246 [2024-10-13 02:27:41.832821] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.246 02:27:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88130 00:13:23.246 [2024-10-13 02:27:41.884041] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.506 02:27:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:23.506 00:13:23.506 real 0m15.692s 00:13:23.506 user 0m17.405s 00:13:23.506 sys 0m3.114s 00:13:23.506 02:27:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.506 02:27:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.506 ************************************ 00:13:23.506 END TEST raid_rebuild_test 00:13:23.506 ************************************ 00:13:23.765 02:27:42 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:23.765 02:27:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:23.765 02:27:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.765 02:27:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:23.765 ************************************ 00:13:23.765 START TEST raid_rebuild_test_sb 00:13:23.765 ************************************ 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:13:23.765 02:27:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88554 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88554 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88554 ']' 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:23.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:23.765 02:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.765 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:23.765 Zero copy mechanism will not be used. 00:13:23.766 [2024-10-13 02:27:42.313691] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:23.766 [2024-10-13 02:27:42.313821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88554 ] 00:13:24.024 [2024-10-13 02:27:42.458402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.024 [2024-10-13 02:27:42.503015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.024 [2024-10-13 02:27:42.545203] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.024 [2024-10-13 02:27:42.545243] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.595 02:27:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.595 BaseBdev1_malloc 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.595 [2024-10-13 02:27:43.159482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:24.595 [2024-10-13 02:27:43.159568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.595 [2024-10-13 02:27:43.159607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:24.595 [2024-10-13 02:27:43.159633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.595 [2024-10-13 02:27:43.161592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.595 [2024-10-13 02:27:43.161623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:24.595 BaseBdev1 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.595 BaseBdev2_malloc 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.595 [2024-10-13 02:27:43.207958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:24.595 [2024-10-13 02:27:43.208055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.595 [2024-10-13 02:27:43.208100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:24.595 [2024-10-13 02:27:43.208122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.595 [2024-10-13 02:27:43.212235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.595 [2024-10-13 02:27:43.212283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:24.595 BaseBdev2 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.595 BaseBdev3_malloc 00:13:24.595 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:24.596 02:27:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.596 [2024-10-13 02:27:43.237608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:24.596 [2024-10-13 02:27:43.237655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.596 [2024-10-13 02:27:43.237681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:24.596 [2024-10-13 02:27:43.237690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.596 [2024-10-13 02:27:43.239709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.596 [2024-10-13 02:27:43.239739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:24.596 BaseBdev3 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.596 BaseBdev4_malloc 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.596 
[2024-10-13 02:27:43.266110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:24.596 [2024-10-13 02:27:43.266166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.596 [2024-10-13 02:27:43.266187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:24.596 [2024-10-13 02:27:43.266196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.596 [2024-10-13 02:27:43.268162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.596 [2024-10-13 02:27:43.268193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:24.596 BaseBdev4 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.596 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.856 spare_malloc 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.856 spare_delay 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:24.856 02:27:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.856 [2024-10-13 02:27:43.306558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:24.856 [2024-10-13 02:27:43.306599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.856 [2024-10-13 02:27:43.306618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:24.856 [2024-10-13 02:27:43.306626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.856 [2024-10-13 02:27:43.308739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.856 [2024-10-13 02:27:43.308775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:24.856 spare 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.856 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.856 [2024-10-13 02:27:43.318608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.856 [2024-10-13 02:27:43.320434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.856 [2024-10-13 02:27:43.320502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.856 [2024-10-13 02:27:43.320552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:24.857 [2024-10-13 02:27:43.320714] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:24.857 [2024-10-13 02:27:43.320737] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.857 [2024-10-13 02:27:43.321020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:24.857 [2024-10-13 02:27:43.321172] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:24.857 [2024-10-13 02:27:43.321197] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:24.857 [2024-10-13 02:27:43.321307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.857 "name": "raid_bdev1", 00:13:24.857 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:24.857 "strip_size_kb": 0, 00:13:24.857 "state": "online", 00:13:24.857 "raid_level": "raid1", 00:13:24.857 "superblock": true, 00:13:24.857 "num_base_bdevs": 4, 00:13:24.857 "num_base_bdevs_discovered": 4, 00:13:24.857 "num_base_bdevs_operational": 4, 00:13:24.857 "base_bdevs_list": [ 00:13:24.857 { 00:13:24.857 "name": "BaseBdev1", 00:13:24.857 "uuid": "c2e17ca9-b057-5c66-80de-cee0abd7a418", 00:13:24.857 "is_configured": true, 00:13:24.857 "data_offset": 2048, 00:13:24.857 "data_size": 63488 00:13:24.857 }, 00:13:24.857 { 00:13:24.857 "name": "BaseBdev2", 00:13:24.857 "uuid": "84929c3f-8913-542e-b2b5-2a5572ba8421", 00:13:24.857 "is_configured": true, 00:13:24.857 "data_offset": 2048, 00:13:24.857 "data_size": 63488 00:13:24.857 }, 00:13:24.857 { 00:13:24.857 "name": "BaseBdev3", 00:13:24.857 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:24.857 "is_configured": true, 00:13:24.857 "data_offset": 2048, 00:13:24.857 "data_size": 63488 00:13:24.857 }, 00:13:24.857 { 00:13:24.857 "name": "BaseBdev4", 00:13:24.857 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:24.857 "is_configured": true, 00:13:24.857 "data_offset": 2048, 00:13:24.857 "data_size": 63488 00:13:24.857 } 00:13:24.857 ] 00:13:24.857 }' 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.857 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.117 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.117 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.117 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:25.377 [2024-10-13 02:27:43.802120] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.377 02:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:25.637 [2024-10-13 02:27:44.089375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:13:25.637 /dev/nbd0 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:25.637 
02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.637 1+0 records in 00:13:25.637 1+0 records out 00:13:25.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312254 s, 13.1 MB/s 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:25.637 02:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:30.916 63488+0 records in 00:13:30.916 63488+0 records out 00:13:30.916 32505856 bytes (33 MB, 31 MiB) copied, 4.96812 s, 6.5 MB/s 00:13:30.916 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:30.916 02:27:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.916 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:30.916 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.916 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:30.917 [2024-10-13 02:27:49.345955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.917 [2024-10-13 02:27:49.373982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:30.917 
02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.917 "name": "raid_bdev1", 00:13:30.917 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:30.917 "strip_size_kb": 0, 00:13:30.917 "state": 
"online", 00:13:30.917 "raid_level": "raid1", 00:13:30.917 "superblock": true, 00:13:30.917 "num_base_bdevs": 4, 00:13:30.917 "num_base_bdevs_discovered": 3, 00:13:30.917 "num_base_bdevs_operational": 3, 00:13:30.917 "base_bdevs_list": [ 00:13:30.917 { 00:13:30.917 "name": null, 00:13:30.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.917 "is_configured": false, 00:13:30.917 "data_offset": 0, 00:13:30.917 "data_size": 63488 00:13:30.917 }, 00:13:30.917 { 00:13:30.917 "name": "BaseBdev2", 00:13:30.917 "uuid": "84929c3f-8913-542e-b2b5-2a5572ba8421", 00:13:30.917 "is_configured": true, 00:13:30.917 "data_offset": 2048, 00:13:30.917 "data_size": 63488 00:13:30.917 }, 00:13:30.917 { 00:13:30.917 "name": "BaseBdev3", 00:13:30.917 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:30.917 "is_configured": true, 00:13:30.917 "data_offset": 2048, 00:13:30.917 "data_size": 63488 00:13:30.917 }, 00:13:30.917 { 00:13:30.917 "name": "BaseBdev4", 00:13:30.917 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:30.917 "is_configured": true, 00:13:30.917 "data_offset": 2048, 00:13:30.917 "data_size": 63488 00:13:30.917 } 00:13:30.917 ] 00:13:30.917 }' 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.917 02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.492 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.492 02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.492 02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.492 [2024-10-13 02:27:49.885138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.492 [2024-10-13 02:27:49.888478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:13:31.492 [2024-10-13 02:27:49.890284] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:31.492 02:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.492 02:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.440 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.440 "name": "raid_bdev1", 00:13:32.440 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:32.440 "strip_size_kb": 0, 00:13:32.440 "state": "online", 00:13:32.440 "raid_level": "raid1", 00:13:32.440 "superblock": true, 00:13:32.440 "num_base_bdevs": 4, 00:13:32.440 "num_base_bdevs_discovered": 4, 00:13:32.440 "num_base_bdevs_operational": 4, 00:13:32.440 "process": { 00:13:32.440 "type": "rebuild", 00:13:32.440 "target": "spare", 00:13:32.440 "progress": { 00:13:32.440 
"blocks": 20480, 00:13:32.440 "percent": 32 00:13:32.440 } 00:13:32.440 }, 00:13:32.440 "base_bdevs_list": [ 00:13:32.440 { 00:13:32.440 "name": "spare", 00:13:32.440 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:32.440 "is_configured": true, 00:13:32.440 "data_offset": 2048, 00:13:32.440 "data_size": 63488 00:13:32.440 }, 00:13:32.440 { 00:13:32.440 "name": "BaseBdev2", 00:13:32.440 "uuid": "84929c3f-8913-542e-b2b5-2a5572ba8421", 00:13:32.440 "is_configured": true, 00:13:32.440 "data_offset": 2048, 00:13:32.440 "data_size": 63488 00:13:32.440 }, 00:13:32.440 { 00:13:32.440 "name": "BaseBdev3", 00:13:32.440 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:32.440 "is_configured": true, 00:13:32.441 "data_offset": 2048, 00:13:32.441 "data_size": 63488 00:13:32.441 }, 00:13:32.441 { 00:13:32.441 "name": "BaseBdev4", 00:13:32.441 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:32.441 "is_configured": true, 00:13:32.441 "data_offset": 2048, 00:13:32.441 "data_size": 63488 00:13:32.441 } 00:13:32.441 ] 00:13:32.441 }' 00:13:32.441 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.441 02:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.441 [2024-10-13 02:27:51.036975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.441 [2024-10-13 02:27:51.094882] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:32.441 [2024-10-13 02:27:51.094956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.441 [2024-10-13 02:27:51.094975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.441 [2024-10-13 02:27:51.094984] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:32.441 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.701 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.701 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.701 "name": "raid_bdev1", 00:13:32.701 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:32.701 "strip_size_kb": 0, 00:13:32.701 "state": "online", 00:13:32.701 "raid_level": "raid1", 00:13:32.701 "superblock": true, 00:13:32.701 "num_base_bdevs": 4, 00:13:32.701 "num_base_bdevs_discovered": 3, 00:13:32.701 "num_base_bdevs_operational": 3, 00:13:32.701 "base_bdevs_list": [ 00:13:32.701 { 00:13:32.701 "name": null, 00:13:32.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.701 "is_configured": false, 00:13:32.701 "data_offset": 0, 00:13:32.701 "data_size": 63488 00:13:32.701 }, 00:13:32.701 { 00:13:32.701 "name": "BaseBdev2", 00:13:32.701 "uuid": "84929c3f-8913-542e-b2b5-2a5572ba8421", 00:13:32.701 "is_configured": true, 00:13:32.701 "data_offset": 2048, 00:13:32.701 "data_size": 63488 00:13:32.701 }, 00:13:32.701 { 00:13:32.701 "name": "BaseBdev3", 00:13:32.701 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:32.701 "is_configured": true, 00:13:32.701 "data_offset": 2048, 00:13:32.701 "data_size": 63488 00:13:32.701 }, 00:13:32.701 { 00:13:32.701 "name": "BaseBdev4", 00:13:32.701 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:32.701 "is_configured": true, 00:13:32.701 "data_offset": 2048, 00:13:32.701 "data_size": 63488 00:13:32.701 } 00:13:32.701 ] 00:13:32.701 }' 00:13:32.701 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.701 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.960 
02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.960 "name": "raid_bdev1", 00:13:32.960 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:32.960 "strip_size_kb": 0, 00:13:32.960 "state": "online", 00:13:32.960 "raid_level": "raid1", 00:13:32.960 "superblock": true, 00:13:32.960 "num_base_bdevs": 4, 00:13:32.960 "num_base_bdevs_discovered": 3, 00:13:32.960 "num_base_bdevs_operational": 3, 00:13:32.960 "base_bdevs_list": [ 00:13:32.960 { 00:13:32.960 "name": null, 00:13:32.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.960 "is_configured": false, 00:13:32.960 "data_offset": 0, 00:13:32.960 "data_size": 63488 00:13:32.960 }, 00:13:32.960 { 00:13:32.960 "name": "BaseBdev2", 00:13:32.960 "uuid": "84929c3f-8913-542e-b2b5-2a5572ba8421", 00:13:32.960 "is_configured": true, 00:13:32.960 "data_offset": 2048, 00:13:32.960 "data_size": 63488 00:13:32.960 }, 00:13:32.960 { 00:13:32.960 "name": "BaseBdev3", 00:13:32.960 "uuid": 
"b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:32.960 "is_configured": true, 00:13:32.960 "data_offset": 2048, 00:13:32.960 "data_size": 63488 00:13:32.960 }, 00:13:32.960 { 00:13:32.960 "name": "BaseBdev4", 00:13:32.960 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:32.960 "is_configured": true, 00:13:32.960 "data_offset": 2048, 00:13:32.960 "data_size": 63488 00:13:32.960 } 00:13:32.960 ] 00:13:32.960 }' 00:13:32.960 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.220 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.220 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.220 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.220 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.220 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.220 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.220 [2024-10-13 02:27:51.706004] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.220 [2024-10-13 02:27:51.709289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:13:33.220 [2024-10-13 02:27:51.711249] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.220 02:27:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.220 02:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.160 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.160 "name": "raid_bdev1", 00:13:34.160 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:34.160 "strip_size_kb": 0, 00:13:34.160 "state": "online", 00:13:34.160 "raid_level": "raid1", 00:13:34.160 "superblock": true, 00:13:34.160 "num_base_bdevs": 4, 00:13:34.160 "num_base_bdevs_discovered": 4, 00:13:34.160 "num_base_bdevs_operational": 4, 00:13:34.160 "process": { 00:13:34.160 "type": "rebuild", 00:13:34.160 "target": "spare", 00:13:34.160 "progress": { 00:13:34.160 "blocks": 20480, 00:13:34.160 "percent": 32 00:13:34.160 } 00:13:34.160 }, 00:13:34.160 "base_bdevs_list": [ 00:13:34.160 { 00:13:34.160 "name": "spare", 00:13:34.160 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:34.160 "is_configured": true, 00:13:34.160 "data_offset": 2048, 00:13:34.160 "data_size": 63488 00:13:34.160 }, 00:13:34.160 { 00:13:34.160 "name": "BaseBdev2", 00:13:34.160 "uuid": "84929c3f-8913-542e-b2b5-2a5572ba8421", 00:13:34.160 "is_configured": true, 00:13:34.160 "data_offset": 2048, 
00:13:34.160 "data_size": 63488 00:13:34.160 }, 00:13:34.160 { 00:13:34.160 "name": "BaseBdev3", 00:13:34.160 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:34.160 "is_configured": true, 00:13:34.160 "data_offset": 2048, 00:13:34.160 "data_size": 63488 00:13:34.160 }, 00:13:34.160 { 00:13:34.160 "name": "BaseBdev4", 00:13:34.160 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:34.161 "is_configured": true, 00:13:34.161 "data_offset": 2048, 00:13:34.161 "data_size": 63488 00:13:34.161 } 00:13:34.161 ] 00:13:34.161 }' 00:13:34.161 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.161 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.161 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.421 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.421 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:34.421 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:34.421 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:34.421 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:34.421 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:34.421 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:34.421 02:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:34.421 02:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.421 02:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.421 [2024-10-13 02:27:52.878019] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:34.421 [2024-10-13 02:27:53.015338] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.421 "name": "raid_bdev1", 00:13:34.421 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:34.421 "strip_size_kb": 0, 00:13:34.421 "state": "online", 00:13:34.421 "raid_level": "raid1", 00:13:34.421 "superblock": true, 00:13:34.421 "num_base_bdevs": 4, 
00:13:34.421 "num_base_bdevs_discovered": 3, 00:13:34.421 "num_base_bdevs_operational": 3, 00:13:34.421 "process": { 00:13:34.421 "type": "rebuild", 00:13:34.421 "target": "spare", 00:13:34.421 "progress": { 00:13:34.421 "blocks": 24576, 00:13:34.421 "percent": 38 00:13:34.421 } 00:13:34.421 }, 00:13:34.421 "base_bdevs_list": [ 00:13:34.421 { 00:13:34.421 "name": "spare", 00:13:34.421 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:34.421 "is_configured": true, 00:13:34.421 "data_offset": 2048, 00:13:34.421 "data_size": 63488 00:13:34.421 }, 00:13:34.421 { 00:13:34.421 "name": null, 00:13:34.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.421 "is_configured": false, 00:13:34.421 "data_offset": 0, 00:13:34.421 "data_size": 63488 00:13:34.421 }, 00:13:34.421 { 00:13:34.421 "name": "BaseBdev3", 00:13:34.421 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:34.421 "is_configured": true, 00:13:34.421 "data_offset": 2048, 00:13:34.421 "data_size": 63488 00:13:34.421 }, 00:13:34.421 { 00:13:34.421 "name": "BaseBdev4", 00:13:34.421 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:34.421 "is_configured": true, 00:13:34.421 "data_offset": 2048, 00:13:34.421 "data_size": 63488 00:13:34.421 } 00:13:34.421 ] 00:13:34.421 }' 00:13:34.421 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=381 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.681 "name": "raid_bdev1", 00:13:34.681 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:34.681 "strip_size_kb": 0, 00:13:34.681 "state": "online", 00:13:34.681 "raid_level": "raid1", 00:13:34.681 "superblock": true, 00:13:34.681 "num_base_bdevs": 4, 00:13:34.681 "num_base_bdevs_discovered": 3, 00:13:34.681 "num_base_bdevs_operational": 3, 00:13:34.681 "process": { 00:13:34.681 "type": "rebuild", 00:13:34.681 "target": "spare", 00:13:34.681 "progress": { 00:13:34.681 "blocks": 26624, 00:13:34.681 "percent": 41 00:13:34.681 } 00:13:34.681 }, 00:13:34.681 "base_bdevs_list": [ 00:13:34.681 { 00:13:34.681 "name": "spare", 00:13:34.681 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:34.681 "is_configured": true, 00:13:34.681 "data_offset": 2048, 00:13:34.681 "data_size": 63488 00:13:34.681 }, 00:13:34.681 { 
00:13:34.681 "name": null, 00:13:34.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.681 "is_configured": false, 00:13:34.681 "data_offset": 0, 00:13:34.681 "data_size": 63488 00:13:34.681 }, 00:13:34.681 { 00:13:34.681 "name": "BaseBdev3", 00:13:34.681 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:34.681 "is_configured": true, 00:13:34.681 "data_offset": 2048, 00:13:34.681 "data_size": 63488 00:13:34.681 }, 00:13:34.681 { 00:13:34.681 "name": "BaseBdev4", 00:13:34.681 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:34.681 "is_configured": true, 00:13:34.681 "data_offset": 2048, 00:13:34.681 "data_size": 63488 00:13:34.681 } 00:13:34.681 ] 00:13:34.681 }' 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.681 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.682 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.682 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.682 02:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:35.622 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.881 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.881 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.881 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.881 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.881 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.881 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:35.881 02:27:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.881 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.882 02:27:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.882 02:27:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.882 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.882 "name": "raid_bdev1", 00:13:35.882 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:35.882 "strip_size_kb": 0, 00:13:35.882 "state": "online", 00:13:35.882 "raid_level": "raid1", 00:13:35.882 "superblock": true, 00:13:35.882 "num_base_bdevs": 4, 00:13:35.882 "num_base_bdevs_discovered": 3, 00:13:35.882 "num_base_bdevs_operational": 3, 00:13:35.882 "process": { 00:13:35.882 "type": "rebuild", 00:13:35.882 "target": "spare", 00:13:35.882 "progress": { 00:13:35.882 "blocks": 49152, 00:13:35.882 "percent": 77 00:13:35.882 } 00:13:35.882 }, 00:13:35.882 "base_bdevs_list": [ 00:13:35.882 { 00:13:35.882 "name": "spare", 00:13:35.882 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:35.882 "is_configured": true, 00:13:35.882 "data_offset": 2048, 00:13:35.882 "data_size": 63488 00:13:35.882 }, 00:13:35.882 { 00:13:35.882 "name": null, 00:13:35.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.882 "is_configured": false, 00:13:35.882 "data_offset": 0, 00:13:35.882 "data_size": 63488 00:13:35.882 }, 00:13:35.882 { 00:13:35.882 "name": "BaseBdev3", 00:13:35.882 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:35.882 "is_configured": true, 00:13:35.882 "data_offset": 2048, 00:13:35.882 "data_size": 63488 00:13:35.882 }, 00:13:35.882 { 00:13:35.882 "name": "BaseBdev4", 00:13:35.882 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:35.882 "is_configured": true, 00:13:35.882 "data_offset": 
2048, 00:13:35.882 "data_size": 63488 00:13:35.882 } 00:13:35.882 ] 00:13:35.882 }' 00:13:35.882 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.882 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.882 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.882 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.882 02:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:36.452 [2024-10-13 02:27:54.922082] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:36.452 [2024-10-13 02:27:54.922190] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:36.452 [2024-10-13 02:27:54.922318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.021 "name": "raid_bdev1", 00:13:37.021 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:37.021 "strip_size_kb": 0, 00:13:37.021 "state": "online", 00:13:37.021 "raid_level": "raid1", 00:13:37.021 "superblock": true, 00:13:37.021 "num_base_bdevs": 4, 00:13:37.021 "num_base_bdevs_discovered": 3, 00:13:37.021 "num_base_bdevs_operational": 3, 00:13:37.021 "base_bdevs_list": [ 00:13:37.021 { 00:13:37.021 "name": "spare", 00:13:37.021 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:37.021 "is_configured": true, 00:13:37.021 "data_offset": 2048, 00:13:37.021 "data_size": 63488 00:13:37.021 }, 00:13:37.021 { 00:13:37.021 "name": null, 00:13:37.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.021 "is_configured": false, 00:13:37.021 "data_offset": 0, 00:13:37.021 "data_size": 63488 00:13:37.021 }, 00:13:37.021 { 00:13:37.021 "name": "BaseBdev3", 00:13:37.021 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:37.021 "is_configured": true, 00:13:37.021 "data_offset": 2048, 00:13:37.021 "data_size": 63488 00:13:37.021 }, 00:13:37.021 { 00:13:37.021 "name": "BaseBdev4", 00:13:37.021 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:37.021 "is_configured": true, 00:13:37.021 "data_offset": 2048, 00:13:37.021 "data_size": 63488 00:13:37.021 } 00:13:37.021 ] 00:13:37.021 }' 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
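The trace above shows `bdev_raid.sh@707` looping while `SECONDS < timeout` and re-reading `.process.type` / `.process.target` with `jq` after each `sleep 1`, until the rebuild finishes and the `[[ none == \r\e\b\u\i\l\d ]]` check triggers the `break` at `@709`. A minimal self-contained sketch of that bounded-polling idiom (canned states and illustrative names, not the actual `bdev_raid.sh` code):

```shell
#!/usr/bin/env bash
# Bounded polling sketch, mirroring bdev_raid.sh@706-711: re-check the
# process type until it leaves "rebuild" or the time budget runs out.
timeout=10
# Canned stand-ins for successive `jq -r '.process.type // "none"'` reads.
states=(rebuild rebuild none)
i=0
SECONDS=0                        # bash builtin: seconds since last assignment
while (( SECONDS < timeout )); do
  ptype=${states[i]:-none}       # stand-in for: rpc_cmd bdev_raid_get_bdevs all | jq ...
  i=$((i + 1))
  [[ $ptype == rebuild ]] || break   # same comparison as [[ rebuild == \r\e\b\u\i\l\d ]]
  sleep 1                        # bdev_raid.sh@711
done
echo "polls=$i ptype=$ptype"
```

The `// "none"` fallback in the real `jq` filters is what lets the same loop terminate cleanly once the `process` object disappears from the RPC output, as happens in the trace when the rebuild completes.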
00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.021 "name": "raid_bdev1", 00:13:37.021 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:37.021 "strip_size_kb": 0, 00:13:37.021 "state": "online", 00:13:37.021 "raid_level": "raid1", 00:13:37.021 "superblock": true, 00:13:37.021 "num_base_bdevs": 4, 00:13:37.021 "num_base_bdevs_discovered": 3, 00:13:37.021 "num_base_bdevs_operational": 3, 00:13:37.021 "base_bdevs_list": [ 00:13:37.021 { 00:13:37.021 "name": "spare", 00:13:37.021 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:37.021 "is_configured": true, 00:13:37.021 "data_offset": 2048, 00:13:37.021 "data_size": 63488 
00:13:37.021 }, 00:13:37.021 { 00:13:37.021 "name": null, 00:13:37.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.021 "is_configured": false, 00:13:37.021 "data_offset": 0, 00:13:37.021 "data_size": 63488 00:13:37.021 }, 00:13:37.021 { 00:13:37.021 "name": "BaseBdev3", 00:13:37.021 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:37.021 "is_configured": true, 00:13:37.021 "data_offset": 2048, 00:13:37.021 "data_size": 63488 00:13:37.021 }, 00:13:37.021 { 00:13:37.021 "name": "BaseBdev4", 00:13:37.021 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:37.021 "is_configured": true, 00:13:37.021 "data_offset": 2048, 00:13:37.021 "data_size": 63488 00:13:37.021 } 00:13:37.021 ] 00:13:37.021 }' 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.021 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.280 02:27:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.280 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.281 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.281 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.281 "name": "raid_bdev1", 00:13:37.281 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:37.281 "strip_size_kb": 0, 00:13:37.281 "state": "online", 00:13:37.281 "raid_level": "raid1", 00:13:37.281 "superblock": true, 00:13:37.281 "num_base_bdevs": 4, 00:13:37.281 "num_base_bdevs_discovered": 3, 00:13:37.281 "num_base_bdevs_operational": 3, 00:13:37.281 "base_bdevs_list": [ 00:13:37.281 { 00:13:37.281 "name": "spare", 00:13:37.281 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:37.281 "is_configured": true, 00:13:37.281 "data_offset": 2048, 00:13:37.281 "data_size": 63488 00:13:37.281 }, 00:13:37.281 { 00:13:37.281 "name": null, 00:13:37.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.281 "is_configured": false, 00:13:37.281 "data_offset": 0, 00:13:37.281 "data_size": 63488 00:13:37.281 }, 00:13:37.281 { 00:13:37.281 "name": "BaseBdev3", 00:13:37.281 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:37.281 "is_configured": true, 00:13:37.281 "data_offset": 2048, 00:13:37.281 "data_size": 63488 00:13:37.281 }, 
00:13:37.281 { 00:13:37.281 "name": "BaseBdev4", 00:13:37.281 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:37.281 "is_configured": true, 00:13:37.281 "data_offset": 2048, 00:13:37.281 "data_size": 63488 00:13:37.281 } 00:13:37.281 ] 00:13:37.281 }' 00:13:37.281 02:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.281 02:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.540 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.540 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.540 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.540 [2024-10-13 02:27:56.216211] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.540 [2024-10-13 02:27:56.216303] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.540 [2024-10-13 02:27:56.216412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.540 [2024-10-13 02:27:56.216502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.540 [2024-10-13 02:27:56.216563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:37.540 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.800 02:27:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:37.800 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:37.800 /dev/nbd0 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.060 1+0 records in 00:13:38.060 1+0 records out 00:13:38.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050676 s, 8.1 MB/s 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:38.060 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:38.060 /dev/nbd1 00:13:38.320 02:27:56 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.320 1+0 records in 00:13:38.320 1+0 records out 00:13:38.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401995 s, 10.2 MB/s 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:38.320 02:27:56 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.320 02:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.579 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.839 [2024-10-13 02:27:57.321079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:13:38.839 [2024-10-13 02:27:57.321204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.839 [2024-10-13 02:27:57.321229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:38.839 [2024-10-13 02:27:57.321243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.839 [2024-10-13 02:27:57.323384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.839 [2024-10-13 02:27:57.323472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:38.839 [2024-10-13 02:27:57.323565] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:38.839 [2024-10-13 02:27:57.323606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.839 [2024-10-13 02:27:57.323738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.839 [2024-10-13 02:27:57.323822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.839 spare 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.839 [2024-10-13 02:27:57.423728] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:38.839 [2024-10-13 02:27:57.423760] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.839 [2024-10-13 02:27:57.424075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:13:38.839 [2024-10-13 02:27:57.424221] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:38.839 [2024-10-13 02:27:57.424245] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:38.839 [2024-10-13 02:27:57.424379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.839 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.840 "name": "raid_bdev1", 00:13:38.840 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:38.840 "strip_size_kb": 0, 00:13:38.840 "state": "online", 00:13:38.840 "raid_level": "raid1", 00:13:38.840 "superblock": true, 00:13:38.840 "num_base_bdevs": 4, 00:13:38.840 "num_base_bdevs_discovered": 3, 00:13:38.840 "num_base_bdevs_operational": 3, 00:13:38.840 "base_bdevs_list": [ 00:13:38.840 { 00:13:38.840 "name": "spare", 00:13:38.840 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:38.840 "is_configured": true, 00:13:38.840 "data_offset": 2048, 00:13:38.840 "data_size": 63488 00:13:38.840 }, 00:13:38.840 { 00:13:38.840 "name": null, 00:13:38.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.840 "is_configured": false, 00:13:38.840 "data_offset": 2048, 00:13:38.840 "data_size": 63488 00:13:38.840 }, 00:13:38.840 { 00:13:38.840 "name": "BaseBdev3", 00:13:38.840 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:38.840 "is_configured": true, 00:13:38.840 "data_offset": 2048, 00:13:38.840 "data_size": 63488 00:13:38.840 }, 00:13:38.840 { 00:13:38.840 "name": "BaseBdev4", 00:13:38.840 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:38.840 "is_configured": true, 00:13:38.840 "data_offset": 2048, 00:13:38.840 "data_size": 63488 00:13:38.840 } 00:13:38.840 ] 00:13:38.840 }' 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.840 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.408 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.408 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.408 02:27:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.408 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.408 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.408 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.408 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.409 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.409 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.409 02:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.409 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.409 "name": "raid_bdev1", 00:13:39.409 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:39.409 "strip_size_kb": 0, 00:13:39.409 "state": "online", 00:13:39.409 "raid_level": "raid1", 00:13:39.409 "superblock": true, 00:13:39.409 "num_base_bdevs": 4, 00:13:39.409 "num_base_bdevs_discovered": 3, 00:13:39.409 "num_base_bdevs_operational": 3, 00:13:39.409 "base_bdevs_list": [ 00:13:39.409 { 00:13:39.409 "name": "spare", 00:13:39.409 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:39.409 "is_configured": true, 00:13:39.409 "data_offset": 2048, 00:13:39.409 "data_size": 63488 00:13:39.409 }, 00:13:39.409 { 00:13:39.409 "name": null, 00:13:39.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.409 "is_configured": false, 00:13:39.409 "data_offset": 2048, 00:13:39.409 "data_size": 63488 00:13:39.409 }, 00:13:39.409 { 00:13:39.409 "name": "BaseBdev3", 00:13:39.409 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:39.409 "is_configured": true, 00:13:39.409 "data_offset": 2048, 00:13:39.409 "data_size": 63488 00:13:39.409 
}, 00:13:39.409 { 00:13:39.409 "name": "BaseBdev4", 00:13:39.409 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:39.409 "is_configured": true, 00:13:39.409 "data_offset": 2048, 00:13:39.409 "data_size": 63488 00:13:39.409 } 00:13:39.409 ] 00:13:39.409 }' 00:13:39.409 02:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.409 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.409 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.409 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.409 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.409 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:39.409 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.409 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.409 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.669 [2024-10-13 02:27:58.107821] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.669 "name": "raid_bdev1", 00:13:39.669 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:39.669 "strip_size_kb": 0, 00:13:39.669 "state": "online", 00:13:39.669 "raid_level": "raid1", 00:13:39.669 "superblock": true, 00:13:39.669 "num_base_bdevs": 4, 00:13:39.669 "num_base_bdevs_discovered": 2, 00:13:39.669 "num_base_bdevs_operational": 
2, 00:13:39.669 "base_bdevs_list": [ 00:13:39.669 { 00:13:39.669 "name": null, 00:13:39.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.669 "is_configured": false, 00:13:39.669 "data_offset": 0, 00:13:39.669 "data_size": 63488 00:13:39.669 }, 00:13:39.669 { 00:13:39.669 "name": null, 00:13:39.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.669 "is_configured": false, 00:13:39.669 "data_offset": 2048, 00:13:39.669 "data_size": 63488 00:13:39.669 }, 00:13:39.669 { 00:13:39.669 "name": "BaseBdev3", 00:13:39.669 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:39.669 "is_configured": true, 00:13:39.669 "data_offset": 2048, 00:13:39.669 "data_size": 63488 00:13:39.669 }, 00:13:39.669 { 00:13:39.669 "name": "BaseBdev4", 00:13:39.669 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:39.669 "is_configured": true, 00:13:39.669 "data_offset": 2048, 00:13:39.669 "data_size": 63488 00:13:39.669 } 00:13:39.669 ] 00:13:39.669 }' 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.669 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.929 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.929 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.929 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.929 [2024-10-13 02:27:58.587275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.929 [2024-10-13 02:27:58.587518] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:39.929 [2024-10-13 02:27:58.587587] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:39.929 [2024-10-13 02:27:58.587664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.929 [2024-10-13 02:27:58.590903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:13:39.929 [2024-10-13 02:27:58.592782] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.929 02:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.929 02:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.311 "name": "raid_bdev1", 00:13:41.311 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:41.311 "strip_size_kb": 0, 00:13:41.311 "state": "online", 00:13:41.311 "raid_level": "raid1", 
00:13:41.311 "superblock": true, 00:13:41.311 "num_base_bdevs": 4, 00:13:41.311 "num_base_bdevs_discovered": 3, 00:13:41.311 "num_base_bdevs_operational": 3, 00:13:41.311 "process": { 00:13:41.311 "type": "rebuild", 00:13:41.311 "target": "spare", 00:13:41.311 "progress": { 00:13:41.311 "blocks": 20480, 00:13:41.311 "percent": 32 00:13:41.311 } 00:13:41.311 }, 00:13:41.311 "base_bdevs_list": [ 00:13:41.311 { 00:13:41.311 "name": "spare", 00:13:41.311 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:41.311 "is_configured": true, 00:13:41.311 "data_offset": 2048, 00:13:41.311 "data_size": 63488 00:13:41.311 }, 00:13:41.311 { 00:13:41.311 "name": null, 00:13:41.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.311 "is_configured": false, 00:13:41.311 "data_offset": 2048, 00:13:41.311 "data_size": 63488 00:13:41.311 }, 00:13:41.311 { 00:13:41.311 "name": "BaseBdev3", 00:13:41.311 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:41.311 "is_configured": true, 00:13:41.311 "data_offset": 2048, 00:13:41.311 "data_size": 63488 00:13:41.311 }, 00:13:41.311 { 00:13:41.311 "name": "BaseBdev4", 00:13:41.311 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:41.311 "is_configured": true, 00:13:41.311 "data_offset": 2048, 00:13:41.311 "data_size": 63488 00:13:41.311 } 00:13:41.311 ] 00:13:41.311 }' 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.311 [2024-10-13 02:27:59.724036] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.311 [2024-10-13 02:27:59.796706] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:41.311 [2024-10-13 02:27:59.796764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.311 [2024-10-13 02:27:59.796778] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.311 [2024-10-13 02:27:59.796787] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.311 02:27:59 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.312 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.312 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.312 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.312 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.312 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.312 "name": "raid_bdev1", 00:13:41.312 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:41.312 "strip_size_kb": 0, 00:13:41.312 "state": "online", 00:13:41.312 "raid_level": "raid1", 00:13:41.312 "superblock": true, 00:13:41.312 "num_base_bdevs": 4, 00:13:41.312 "num_base_bdevs_discovered": 2, 00:13:41.312 "num_base_bdevs_operational": 2, 00:13:41.312 "base_bdevs_list": [ 00:13:41.312 { 00:13:41.312 "name": null, 00:13:41.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.312 "is_configured": false, 00:13:41.312 "data_offset": 0, 00:13:41.312 "data_size": 63488 00:13:41.312 }, 00:13:41.312 { 00:13:41.312 "name": null, 00:13:41.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.312 "is_configured": false, 00:13:41.312 "data_offset": 2048, 00:13:41.312 "data_size": 63488 00:13:41.312 }, 00:13:41.312 { 00:13:41.312 "name": "BaseBdev3", 00:13:41.312 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:41.312 "is_configured": true, 00:13:41.312 "data_offset": 2048, 00:13:41.312 "data_size": 63488 00:13:41.312 }, 00:13:41.312 { 00:13:41.312 "name": "BaseBdev4", 00:13:41.312 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:41.312 "is_configured": true, 00:13:41.312 "data_offset": 2048, 00:13:41.312 "data_size": 63488 00:13:41.312 } 00:13:41.312 ] 00:13:41.312 }' 00:13:41.312 02:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:41.312 02:27:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.882 02:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.882 02:28:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.882 02:28:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.882 [2024-10-13 02:28:00.275693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.882 [2024-10-13 02:28:00.275794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.882 [2024-10-13 02:28:00.275837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:13:41.882 [2024-10-13 02:28:00.275884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.882 [2024-10-13 02:28:00.276332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.882 [2024-10-13 02:28:00.276393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.882 [2024-10-13 02:28:00.276498] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:41.882 [2024-10-13 02:28:00.276543] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:41.882 [2024-10-13 02:28:00.276585] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:41.882 [2024-10-13 02:28:00.276652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.882 [2024-10-13 02:28:00.279472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:13:41.882 [2024-10-13 02:28:00.281351] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.882 spare 00:13:41.882 02:28:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.882 02:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:42.821 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.821 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.821 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.821 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.821 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.821 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.821 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.821 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.821 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.822 "name": "raid_bdev1", 00:13:42.822 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:42.822 "strip_size_kb": 0, 00:13:42.822 "state": "online", 00:13:42.822 
"raid_level": "raid1", 00:13:42.822 "superblock": true, 00:13:42.822 "num_base_bdevs": 4, 00:13:42.822 "num_base_bdevs_discovered": 3, 00:13:42.822 "num_base_bdevs_operational": 3, 00:13:42.822 "process": { 00:13:42.822 "type": "rebuild", 00:13:42.822 "target": "spare", 00:13:42.822 "progress": { 00:13:42.822 "blocks": 20480, 00:13:42.822 "percent": 32 00:13:42.822 } 00:13:42.822 }, 00:13:42.822 "base_bdevs_list": [ 00:13:42.822 { 00:13:42.822 "name": "spare", 00:13:42.822 "uuid": "4f73083f-8ca3-592c-b84f-7d7ecb09cb3b", 00:13:42.822 "is_configured": true, 00:13:42.822 "data_offset": 2048, 00:13:42.822 "data_size": 63488 00:13:42.822 }, 00:13:42.822 { 00:13:42.822 "name": null, 00:13:42.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.822 "is_configured": false, 00:13:42.822 "data_offset": 2048, 00:13:42.822 "data_size": 63488 00:13:42.822 }, 00:13:42.822 { 00:13:42.822 "name": "BaseBdev3", 00:13:42.822 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:42.822 "is_configured": true, 00:13:42.822 "data_offset": 2048, 00:13:42.822 "data_size": 63488 00:13:42.822 }, 00:13:42.822 { 00:13:42.822 "name": "BaseBdev4", 00:13:42.822 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:42.822 "is_configured": true, 00:13:42.822 "data_offset": 2048, 00:13:42.822 "data_size": 63488 00:13:42.822 } 00:13:42.822 ] 00:13:42.822 }' 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.822 [2024-10-13 02:28:01.444056] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.822 [2024-10-13 02:28:01.485260] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:42.822 [2024-10-13 02:28:01.485357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.822 [2024-10-13 02:28:01.485376] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.822 [2024-10-13 02:28:01.485383] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.822 
02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.822 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.082 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.082 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.082 "name": "raid_bdev1", 00:13:43.082 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:43.082 "strip_size_kb": 0, 00:13:43.082 "state": "online", 00:13:43.082 "raid_level": "raid1", 00:13:43.082 "superblock": true, 00:13:43.082 "num_base_bdevs": 4, 00:13:43.082 "num_base_bdevs_discovered": 2, 00:13:43.082 "num_base_bdevs_operational": 2, 00:13:43.082 "base_bdevs_list": [ 00:13:43.082 { 00:13:43.082 "name": null, 00:13:43.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.082 "is_configured": false, 00:13:43.082 "data_offset": 0, 00:13:43.082 "data_size": 63488 00:13:43.082 }, 00:13:43.082 { 00:13:43.082 "name": null, 00:13:43.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.082 "is_configured": false, 00:13:43.082 "data_offset": 2048, 00:13:43.082 "data_size": 63488 00:13:43.082 }, 00:13:43.082 { 00:13:43.082 "name": "BaseBdev3", 00:13:43.082 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:43.082 "is_configured": true, 00:13:43.082 "data_offset": 2048, 00:13:43.082 "data_size": 63488 00:13:43.082 }, 00:13:43.082 { 00:13:43.082 "name": "BaseBdev4", 00:13:43.082 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:43.082 "is_configured": true, 00:13:43.082 "data_offset": 2048, 00:13:43.082 "data_size": 63488 00:13:43.082 } 00:13:43.082 ] 00:13:43.082 }' 00:13:43.082 02:28:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.082 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.341 "name": "raid_bdev1", 00:13:43.341 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:43.341 "strip_size_kb": 0, 00:13:43.341 "state": "online", 00:13:43.341 "raid_level": "raid1", 00:13:43.341 "superblock": true, 00:13:43.341 "num_base_bdevs": 4, 00:13:43.341 "num_base_bdevs_discovered": 2, 00:13:43.341 "num_base_bdevs_operational": 2, 00:13:43.341 "base_bdevs_list": [ 00:13:43.341 { 00:13:43.341 "name": null, 00:13:43.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.341 "is_configured": false, 00:13:43.341 "data_offset": 0, 00:13:43.341 "data_size": 63488 00:13:43.341 }, 00:13:43.341 
{ 00:13:43.341 "name": null, 00:13:43.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.341 "is_configured": false, 00:13:43.341 "data_offset": 2048, 00:13:43.341 "data_size": 63488 00:13:43.341 }, 00:13:43.341 { 00:13:43.341 "name": "BaseBdev3", 00:13:43.341 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:43.341 "is_configured": true, 00:13:43.341 "data_offset": 2048, 00:13:43.341 "data_size": 63488 00:13:43.341 }, 00:13:43.341 { 00:13:43.341 "name": "BaseBdev4", 00:13:43.341 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:43.341 "is_configured": true, 00:13:43.341 "data_offset": 2048, 00:13:43.341 "data_size": 63488 00:13:43.341 } 00:13:43.341 ] 00:13:43.341 }' 00:13:43.341 02:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.341 02:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.341 02:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.601 [2024-10-13 02:28:02.075865] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:43.601 [2024-10-13 02:28:02.075976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.601 [2024-10-13 02:28:02.076005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:43.601 [2024-10-13 02:28:02.076015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.601 [2024-10-13 02:28:02.076418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.601 [2024-10-13 02:28:02.076434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:43.601 [2024-10-13 02:28:02.076505] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:43.601 [2024-10-13 02:28:02.076518] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:43.601 [2024-10-13 02:28:02.076529] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:43.601 [2024-10-13 02:28:02.076538] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:43.601 BaseBdev1 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.601 02:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.540 02:28:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.540 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.540 "name": "raid_bdev1", 00:13:44.540 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:44.540 "strip_size_kb": 0, 00:13:44.540 "state": "online", 00:13:44.540 "raid_level": "raid1", 00:13:44.540 "superblock": true, 00:13:44.541 "num_base_bdevs": 4, 00:13:44.541 "num_base_bdevs_discovered": 2, 00:13:44.541 "num_base_bdevs_operational": 2, 00:13:44.541 "base_bdevs_list": [ 00:13:44.541 { 00:13:44.541 "name": null, 00:13:44.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.541 "is_configured": false, 00:13:44.541 "data_offset": 0, 00:13:44.541 "data_size": 63488 00:13:44.541 }, 00:13:44.541 { 00:13:44.541 "name": null, 00:13:44.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.541 
"is_configured": false, 00:13:44.541 "data_offset": 2048, 00:13:44.541 "data_size": 63488 00:13:44.541 }, 00:13:44.541 { 00:13:44.541 "name": "BaseBdev3", 00:13:44.541 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:44.541 "is_configured": true, 00:13:44.541 "data_offset": 2048, 00:13:44.541 "data_size": 63488 00:13:44.541 }, 00:13:44.541 { 00:13:44.541 "name": "BaseBdev4", 00:13:44.541 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:44.541 "is_configured": true, 00:13:44.541 "data_offset": 2048, 00:13:44.541 "data_size": 63488 00:13:44.541 } 00:13:44.541 ] 00:13:44.541 }' 00:13:44.541 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.541 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:45.110 "name": "raid_bdev1", 00:13:45.110 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:45.110 "strip_size_kb": 0, 00:13:45.110 "state": "online", 00:13:45.110 "raid_level": "raid1", 00:13:45.110 "superblock": true, 00:13:45.110 "num_base_bdevs": 4, 00:13:45.110 "num_base_bdevs_discovered": 2, 00:13:45.110 "num_base_bdevs_operational": 2, 00:13:45.110 "base_bdevs_list": [ 00:13:45.110 { 00:13:45.110 "name": null, 00:13:45.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.110 "is_configured": false, 00:13:45.110 "data_offset": 0, 00:13:45.110 "data_size": 63488 00:13:45.110 }, 00:13:45.110 { 00:13:45.110 "name": null, 00:13:45.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.110 "is_configured": false, 00:13:45.110 "data_offset": 2048, 00:13:45.110 "data_size": 63488 00:13:45.110 }, 00:13:45.110 { 00:13:45.110 "name": "BaseBdev3", 00:13:45.110 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:45.110 "is_configured": true, 00:13:45.110 "data_offset": 2048, 00:13:45.110 "data_size": 63488 00:13:45.110 }, 00:13:45.110 { 00:13:45.110 "name": "BaseBdev4", 00:13:45.110 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:45.110 "is_configured": true, 00:13:45.110 "data_offset": 2048, 00:13:45.110 "data_size": 63488 00:13:45.110 } 00:13:45.110 ] 00:13:45.110 }' 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.110 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.110 [2024-10-13 02:28:03.689077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.110 [2024-10-13 02:28:03.689228] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:45.110 [2024-10-13 02:28:03.689243] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:45.110 request: 00:13:45.110 { 00:13:45.111 "base_bdev": "BaseBdev1", 00:13:45.111 "raid_bdev": "raid_bdev1", 00:13:45.111 "method": "bdev_raid_add_base_bdev", 00:13:45.111 "req_id": 1 00:13:45.111 } 00:13:45.111 Got JSON-RPC error response 00:13:45.111 response: 00:13:45.111 { 00:13:45.111 "code": -22, 00:13:45.111 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:45.111 } 00:13:45.111 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:45.111 02:28:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:13:45.111 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.111 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.111 02:28:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.111 02:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:46.056 02:28:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.330 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.330 "name": "raid_bdev1", 00:13:46.330 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:46.330 "strip_size_kb": 0, 00:13:46.330 "state": "online", 00:13:46.330 "raid_level": "raid1", 00:13:46.330 "superblock": true, 00:13:46.330 "num_base_bdevs": 4, 00:13:46.330 "num_base_bdevs_discovered": 2, 00:13:46.330 "num_base_bdevs_operational": 2, 00:13:46.330 "base_bdevs_list": [ 00:13:46.330 { 00:13:46.330 "name": null, 00:13:46.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.330 "is_configured": false, 00:13:46.330 "data_offset": 0, 00:13:46.330 "data_size": 63488 00:13:46.330 }, 00:13:46.330 { 00:13:46.330 "name": null, 00:13:46.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.330 "is_configured": false, 00:13:46.330 "data_offset": 2048, 00:13:46.330 "data_size": 63488 00:13:46.330 }, 00:13:46.330 { 00:13:46.330 "name": "BaseBdev3", 00:13:46.330 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:46.330 "is_configured": true, 00:13:46.330 "data_offset": 2048, 00:13:46.330 "data_size": 63488 00:13:46.330 }, 00:13:46.330 { 00:13:46.330 "name": "BaseBdev4", 00:13:46.330 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:46.330 "is_configured": true, 00:13:46.330 "data_offset": 2048, 00:13:46.330 "data_size": 63488 00:13:46.330 } 00:13:46.330 ] 00:13:46.330 }' 00:13:46.330 02:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.330 02:28:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.600 02:28:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.600 "name": "raid_bdev1", 00:13:46.600 "uuid": "e89af6a5-e509-4606-b666-4c4cebf1ae8c", 00:13:46.600 "strip_size_kb": 0, 00:13:46.600 "state": "online", 00:13:46.600 "raid_level": "raid1", 00:13:46.600 "superblock": true, 00:13:46.600 "num_base_bdevs": 4, 00:13:46.600 "num_base_bdevs_discovered": 2, 00:13:46.600 "num_base_bdevs_operational": 2, 00:13:46.600 "base_bdevs_list": [ 00:13:46.600 { 00:13:46.600 "name": null, 00:13:46.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.600 "is_configured": false, 00:13:46.600 "data_offset": 0, 00:13:46.600 "data_size": 63488 00:13:46.600 }, 00:13:46.600 { 00:13:46.600 "name": null, 00:13:46.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.600 "is_configured": false, 00:13:46.600 "data_offset": 2048, 00:13:46.600 "data_size": 63488 00:13:46.600 }, 00:13:46.600 { 00:13:46.600 "name": "BaseBdev3", 00:13:46.600 "uuid": "b9f9e384-fedf-5aad-9372-0737397cfbd3", 00:13:46.600 "is_configured": true, 00:13:46.600 "data_offset": 2048, 00:13:46.600 "data_size": 63488 00:13:46.600 }, 
00:13:46.600 { 00:13:46.600 "name": "BaseBdev4", 00:13:46.600 "uuid": "8d0a8d47-5749-5f1a-b1bf-54c9fdb1b2ce", 00:13:46.600 "is_configured": true, 00:13:46.600 "data_offset": 2048, 00:13:46.600 "data_size": 63488 00:13:46.600 } 00:13:46.600 ] 00:13:46.600 }' 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88554 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88554 ']' 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88554 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.600 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88554 00:13:46.860 killing process with pid 88554 00:13:46.860 Received shutdown signal, test time was about 60.000000 seconds 00:13:46.860 00:13:46.860 Latency(us) 00:13:46.860 [2024-10-13T02:28:05.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.860 [2024-10-13T02:28:05.544Z] =================================================================================================================== 00:13:46.860 [2024-10-13T02:28:05.544Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:46.860 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:13:46.860 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:46.860 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88554' 00:13:46.860 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88554 00:13:46.860 [2024-10-13 02:28:05.292717] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:46.860 [2024-10-13 02:28:05.292842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.860 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88554 00:13:46.860 [2024-10-13 02:28:05.292913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.860 [2024-10-13 02:28:05.292924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:46.860 [2024-10-13 02:28:05.341893] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:47.120 00:13:47.120 real 0m23.372s 00:13:47.120 user 0m28.816s 00:13:47.120 sys 0m3.861s 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.120 ************************************ 00:13:47.120 END TEST raid_rebuild_test_sb 00:13:47.120 ************************************ 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.120 02:28:05 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:47.120 02:28:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:47.120 02:28:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.120 02:28:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:47.120 ************************************ 00:13:47.120 START TEST raid_rebuild_test_io 00:13:47.120 ************************************ 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89290 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89290 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89290 ']' 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.120 02:28:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.120 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:47.120 Zero copy mechanism will not be used. 00:13:47.120 [2024-10-13 02:28:05.755546] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:47.120 [2024-10-13 02:28:05.755693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89290 ] 00:13:47.380 [2024-10-13 02:28:05.899866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.380 [2024-10-13 02:28:05.946311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.380 [2024-10-13 02:28:05.989302] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.380 [2024-10-13 02:28:05.989337] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.949 BaseBdev1_malloc 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.949 [2024-10-13 02:28:06.587222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:47.949 [2024-10-13 02:28:06.587280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.949 [2024-10-13 02:28:06.587303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:47.949 [2024-10-13 02:28:06.587316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.949 [2024-10-13 02:28:06.589383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.949 [2024-10-13 02:28:06.589418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.949 BaseBdev1 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.949 BaseBdev2_malloc 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.949 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.209 [2024-10-13 02:28:06.632708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:48.209 [2024-10-13 02:28:06.632807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.209 [2024-10-13 02:28:06.632853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:48.210 [2024-10-13 02:28:06.632917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.210 [2024-10-13 02:28:06.637703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.210 [2024-10-13 02:28:06.637774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:48.210 BaseBdev2 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.210 BaseBdev3_malloc 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 02:28:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.210 [2024-10-13 02:28:06.663321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:48.210 [2024-10-13 02:28:06.663370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.210 [2024-10-13 02:28:06.663396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:48.210 [2024-10-13 02:28:06.663404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.210 [2024-10-13 02:28:06.665336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.210 [2024-10-13 02:28:06.665422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:48.210 BaseBdev3 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.210 BaseBdev4_malloc 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.210 [2024-10-13 02:28:06.691556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:48.210 [2024-10-13 02:28:06.691600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.210 [2024-10-13 02:28:06.691646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:48.210 [2024-10-13 02:28:06.691654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.210 [2024-10-13 02:28:06.693674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.210 [2024-10-13 02:28:06.693708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:48.210 BaseBdev4 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.210 spare_malloc 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.210 spare_delay 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 
02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.210 [2024-10-13 02:28:06.731901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.210 [2024-10-13 02:28:06.731943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.210 [2024-10-13 02:28:06.731961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:48.210 [2024-10-13 02:28:06.731970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.210 [2024-10-13 02:28:06.733937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.210 [2024-10-13 02:28:06.733969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.210 spare 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.210 [2024-10-13 02:28:06.743956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.210 [2024-10-13 02:28:06.745768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.210 [2024-10-13 02:28:06.745831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.210 [2024-10-13 02:28:06.745885] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:48.210 [2024-10-13 02:28:06.745955] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:48.210 [2024-10-13 02:28:06.745968] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:48.210 [2024-10-13 02:28:06.746203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:48.210 [2024-10-13 02:28:06.746345] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:48.210 [2024-10-13 02:28:06.746365] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:48.210 [2024-10-13 02:28:06.746475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.210 02:28:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.210 "name": "raid_bdev1", 00:13:48.210 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:48.210 "strip_size_kb": 0, 00:13:48.210 "state": "online", 00:13:48.210 "raid_level": "raid1", 00:13:48.210 "superblock": false, 00:13:48.210 "num_base_bdevs": 4, 00:13:48.210 "num_base_bdevs_discovered": 4, 00:13:48.210 "num_base_bdevs_operational": 4, 00:13:48.210 "base_bdevs_list": [ 00:13:48.210 { 00:13:48.210 "name": "BaseBdev1", 00:13:48.210 "uuid": "14eac1fb-84df-52aa-bb7c-ecdc40a61976", 00:13:48.210 "is_configured": true, 00:13:48.210 "data_offset": 0, 00:13:48.210 "data_size": 65536 00:13:48.210 }, 00:13:48.210 { 00:13:48.210 "name": "BaseBdev2", 00:13:48.210 "uuid": "4d354b2e-d0bd-5a81-83e7-6439aa9116f3", 00:13:48.210 "is_configured": true, 00:13:48.210 "data_offset": 0, 00:13:48.210 "data_size": 65536 00:13:48.210 }, 00:13:48.210 { 00:13:48.210 "name": "BaseBdev3", 00:13:48.210 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:48.210 "is_configured": true, 00:13:48.210 "data_offset": 0, 00:13:48.210 "data_size": 65536 00:13:48.210 }, 00:13:48.210 { 00:13:48.210 "name": "BaseBdev4", 00:13:48.210 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:48.210 "is_configured": true, 00:13:48.210 "data_offset": 0, 00:13:48.210 "data_size": 65536 
00:13:48.210 } 00:13:48.210 ] 00:13:48.210 }' 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.210 02:28:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.781 [2024-10-13 02:28:07.203479] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.781 [2024-10-13 02:28:07.275048] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.781 "name": "raid_bdev1", 00:13:48.781 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:48.781 "strip_size_kb": 0, 00:13:48.781 "state": "online", 00:13:48.781 "raid_level": "raid1", 00:13:48.781 "superblock": false, 00:13:48.781 "num_base_bdevs": 4, 00:13:48.781 "num_base_bdevs_discovered": 3, 00:13:48.781 "num_base_bdevs_operational": 3, 00:13:48.781 "base_bdevs_list": [ 00:13:48.781 { 00:13:48.781 "name": null, 00:13:48.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.781 "is_configured": false, 00:13:48.781 "data_offset": 0, 00:13:48.781 "data_size": 65536 00:13:48.781 }, 00:13:48.781 { 00:13:48.781 "name": "BaseBdev2", 00:13:48.781 "uuid": "4d354b2e-d0bd-5a81-83e7-6439aa9116f3", 00:13:48.781 "is_configured": true, 00:13:48.781 "data_offset": 0, 00:13:48.781 "data_size": 65536 00:13:48.781 }, 00:13:48.781 { 00:13:48.781 "name": "BaseBdev3", 00:13:48.781 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:48.781 "is_configured": true, 00:13:48.781 "data_offset": 0, 00:13:48.781 "data_size": 65536 00:13:48.781 }, 00:13:48.781 { 00:13:48.781 "name": "BaseBdev4", 00:13:48.781 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:48.781 "is_configured": true, 00:13:48.781 "data_offset": 0, 00:13:48.781 "data_size": 65536 00:13:48.781 } 00:13:48.781 ] 00:13:48.781 }' 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.781 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.781 [2024-10-13 02:28:07.364875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:48.781 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:48.781 Zero copy mechanism will not be used. 00:13:48.781 Running I/O for 60 seconds... 00:13:49.040 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:49.040 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.040 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.040 [2024-10-13 02:28:07.709201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.300 02:28:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.300 02:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:49.300 [2024-10-13 02:28:07.777737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:13:49.300 [2024-10-13 02:28:07.779696] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.300 [2024-10-13 02:28:07.898620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:49.300 [2024-10-13 02:28:07.899825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:49.558 [2024-10-13 02:28:08.133285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.558 [2024-10-13 02:28:08.133884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.818 150.00 IOPS, 450.00 MiB/s [2024-10-13T02:28:08.502Z] [2024-10-13 02:28:08.487660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:50.078 [2024-10-13 02:28:08.620978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 
6144 offset_end: 12288 00:13:50.078 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.078 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.078 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.078 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.078 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.078 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.078 02:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.078 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.078 02:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.338 02:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.338 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.338 "name": "raid_bdev1", 00:13:50.338 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:50.338 "strip_size_kb": 0, 00:13:50.338 "state": "online", 00:13:50.338 "raid_level": "raid1", 00:13:50.338 "superblock": false, 00:13:50.338 "num_base_bdevs": 4, 00:13:50.338 "num_base_bdevs_discovered": 4, 00:13:50.338 "num_base_bdevs_operational": 4, 00:13:50.338 "process": { 00:13:50.338 "type": "rebuild", 00:13:50.338 "target": "spare", 00:13:50.338 "progress": { 00:13:50.338 "blocks": 10240, 00:13:50.338 "percent": 15 00:13:50.338 } 00:13:50.338 }, 00:13:50.338 "base_bdevs_list": [ 00:13:50.338 { 00:13:50.338 "name": "spare", 00:13:50.338 "uuid": "a59bb75a-97c0-5956-8888-ed3cbe9fd335", 00:13:50.338 "is_configured": true, 00:13:50.338 
"data_offset": 0, 00:13:50.338 "data_size": 65536 00:13:50.338 }, 00:13:50.338 { 00:13:50.338 "name": "BaseBdev2", 00:13:50.338 "uuid": "4d354b2e-d0bd-5a81-83e7-6439aa9116f3", 00:13:50.338 "is_configured": true, 00:13:50.338 "data_offset": 0, 00:13:50.338 "data_size": 65536 00:13:50.338 }, 00:13:50.338 { 00:13:50.338 "name": "BaseBdev3", 00:13:50.338 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:50.338 "is_configured": true, 00:13:50.338 "data_offset": 0, 00:13:50.338 "data_size": 65536 00:13:50.338 }, 00:13:50.338 { 00:13:50.338 "name": "BaseBdev4", 00:13:50.338 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:50.338 "is_configured": true, 00:13:50.338 "data_offset": 0, 00:13:50.338 "data_size": 65536 00:13:50.338 } 00:13:50.338 ] 00:13:50.338 }' 00:13:50.338 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.338 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.338 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.338 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.338 02:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:50.338 02:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.338 02:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.338 [2024-10-13 02:28:08.890176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.338 [2024-10-13 02:28:08.958561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:50.338 [2024-10-13 02:28:08.970104] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:50.338 
[2024-10-13 02:28:08.980369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.338 [2024-10-13 02:28:08.980409] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.338 [2024-10-13 02:28:08.980426] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.338 [2024-10-13 02:28:08.991142] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:13:50.338 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.338 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.339 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.598 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.598 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.598 "name": "raid_bdev1", 00:13:50.598 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:50.598 "strip_size_kb": 0, 00:13:50.598 "state": "online", 00:13:50.598 "raid_level": "raid1", 00:13:50.598 "superblock": false, 00:13:50.598 "num_base_bdevs": 4, 00:13:50.598 "num_base_bdevs_discovered": 3, 00:13:50.598 "num_base_bdevs_operational": 3, 00:13:50.598 "base_bdevs_list": [ 00:13:50.598 { 00:13:50.598 "name": null, 00:13:50.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.598 "is_configured": false, 00:13:50.598 "data_offset": 0, 00:13:50.598 "data_size": 65536 00:13:50.598 }, 00:13:50.598 { 00:13:50.598 "name": "BaseBdev2", 00:13:50.598 "uuid": "4d354b2e-d0bd-5a81-83e7-6439aa9116f3", 00:13:50.598 "is_configured": true, 00:13:50.598 "data_offset": 0, 00:13:50.598 "data_size": 65536 00:13:50.598 }, 00:13:50.598 { 00:13:50.598 "name": "BaseBdev3", 00:13:50.598 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:50.598 "is_configured": true, 00:13:50.598 "data_offset": 0, 00:13:50.598 "data_size": 65536 00:13:50.598 }, 00:13:50.598 { 00:13:50.598 "name": "BaseBdev4", 00:13:50.598 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:50.598 "is_configured": true, 00:13:50.598 "data_offset": 0, 00:13:50.598 "data_size": 65536 00:13:50.598 } 00:13:50.598 ] 00:13:50.598 }' 00:13:50.598 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.598 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.858 159.00 IOPS, 477.00 MiB/s [2024-10-13T02:28:09.542Z] 02:28:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.858 "name": "raid_bdev1", 00:13:50.858 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:50.858 "strip_size_kb": 0, 00:13:50.858 "state": "online", 00:13:50.858 "raid_level": "raid1", 00:13:50.858 "superblock": false, 00:13:50.858 "num_base_bdevs": 4, 00:13:50.858 "num_base_bdevs_discovered": 3, 00:13:50.858 "num_base_bdevs_operational": 3, 00:13:50.858 "base_bdevs_list": [ 00:13:50.858 { 00:13:50.858 "name": null, 00:13:50.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.858 "is_configured": false, 00:13:50.858 "data_offset": 0, 00:13:50.858 "data_size": 65536 00:13:50.858 }, 00:13:50.858 { 00:13:50.858 "name": "BaseBdev2", 00:13:50.858 "uuid": "4d354b2e-d0bd-5a81-83e7-6439aa9116f3", 00:13:50.858 "is_configured": true, 00:13:50.858 "data_offset": 0, 00:13:50.858 "data_size": 65536 00:13:50.858 }, 00:13:50.858 { 
00:13:50.858 "name": "BaseBdev3", 00:13:50.858 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:50.858 "is_configured": true, 00:13:50.858 "data_offset": 0, 00:13:50.858 "data_size": 65536 00:13:50.858 }, 00:13:50.858 { 00:13:50.858 "name": "BaseBdev4", 00:13:50.858 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:50.858 "is_configured": true, 00:13:50.858 "data_offset": 0, 00:13:50.858 "data_size": 65536 00:13:50.858 } 00:13:50.858 ] 00:13:50.858 }' 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.858 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.119 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.119 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.119 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.119 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.119 [2024-10-13 02:28:09.583771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.119 02:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.119 02:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:51.119 [2024-10-13 02:28:09.643469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:51.119 [2024-10-13 02:28:09.645497] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:51.119 [2024-10-13 02:28:09.746812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:51.119 
[2024-10-13 02:28:09.747434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:51.379 [2024-10-13 02:28:09.890235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:51.379 [2024-10-13 02:28:09.890655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:51.638 [2024-10-13 02:28:10.229679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:51.898 163.33 IOPS, 490.00 MiB/s [2024-10-13T02:28:10.582Z] [2024-10-13 02:28:10.541336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.158 02:28:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.158 "name": "raid_bdev1", 00:13:52.158 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:52.158 "strip_size_kb": 0, 00:13:52.158 "state": "online", 00:13:52.158 "raid_level": "raid1", 00:13:52.158 "superblock": false, 00:13:52.158 "num_base_bdevs": 4, 00:13:52.158 "num_base_bdevs_discovered": 4, 00:13:52.158 "num_base_bdevs_operational": 4, 00:13:52.158 "process": { 00:13:52.158 "type": "rebuild", 00:13:52.158 "target": "spare", 00:13:52.158 "progress": { 00:13:52.158 "blocks": 14336, 00:13:52.158 "percent": 21 00:13:52.158 } 00:13:52.158 }, 00:13:52.158 "base_bdevs_list": [ 00:13:52.158 { 00:13:52.158 "name": "spare", 00:13:52.158 "uuid": "a59bb75a-97c0-5956-8888-ed3cbe9fd335", 00:13:52.158 "is_configured": true, 00:13:52.158 "data_offset": 0, 00:13:52.158 "data_size": 65536 00:13:52.158 }, 00:13:52.158 { 00:13:52.158 "name": "BaseBdev2", 00:13:52.158 "uuid": "4d354b2e-d0bd-5a81-83e7-6439aa9116f3", 00:13:52.158 "is_configured": true, 00:13:52.158 "data_offset": 0, 00:13:52.158 "data_size": 65536 00:13:52.158 }, 00:13:52.158 { 00:13:52.158 "name": "BaseBdev3", 00:13:52.158 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:52.158 "is_configured": true, 00:13:52.158 "data_offset": 0, 00:13:52.158 "data_size": 65536 00:13:52.158 }, 00:13:52.158 { 00:13:52.158 "name": "BaseBdev4", 00:13:52.158 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:52.158 "is_configured": true, 00:13:52.158 "data_offset": 0, 00:13:52.158 "data_size": 65536 00:13:52.158 } 00:13:52.158 ] 00:13:52.158 }' 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.158 [2024-10-13 02:28:10.763167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.158 [2024-10-13 02:28:10.783081] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:13:52.158 [2024-10-13 02:28:10.783114] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.158 02:28:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.158 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.418 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.418 "name": "raid_bdev1", 00:13:52.418 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:52.418 "strip_size_kb": 0, 00:13:52.418 "state": "online", 00:13:52.418 "raid_level": "raid1", 00:13:52.418 "superblock": false, 00:13:52.418 "num_base_bdevs": 4, 00:13:52.418 "num_base_bdevs_discovered": 3, 00:13:52.418 "num_base_bdevs_operational": 3, 00:13:52.418 "process": { 00:13:52.418 "type": "rebuild", 00:13:52.418 "target": "spare", 00:13:52.418 "progress": { 00:13:52.418 "blocks": 18432, 00:13:52.418 "percent": 28 00:13:52.418 } 00:13:52.418 }, 00:13:52.418 "base_bdevs_list": [ 00:13:52.418 { 00:13:52.419 "name": "spare", 00:13:52.419 "uuid": "a59bb75a-97c0-5956-8888-ed3cbe9fd335", 00:13:52.419 "is_configured": true, 00:13:52.419 "data_offset": 0, 00:13:52.419 "data_size": 65536 00:13:52.419 }, 00:13:52.419 { 00:13:52.419 "name": null, 00:13:52.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.419 "is_configured": false, 00:13:52.419 "data_offset": 0, 00:13:52.419 "data_size": 65536 00:13:52.419 }, 00:13:52.419 { 00:13:52.419 "name": "BaseBdev3", 00:13:52.419 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:52.419 "is_configured": true, 00:13:52.419 "data_offset": 0, 00:13:52.419 "data_size": 65536 00:13:52.419 }, 
00:13:52.419 { 00:13:52.419 "name": "BaseBdev4", 00:13:52.419 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:52.419 "is_configured": true, 00:13:52.419 "data_offset": 0, 00:13:52.419 "data_size": 65536 00:13:52.419 } 00:13:52.419 ] 00:13:52.419 }' 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.419 [2024-10-13 02:28:10.904359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:52.419 [2024-10-13 02:28:10.904721] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.419 "name": "raid_bdev1", 00:13:52.419 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:52.419 "strip_size_kb": 0, 00:13:52.419 "state": "online", 00:13:52.419 "raid_level": "raid1", 00:13:52.419 "superblock": false, 00:13:52.419 "num_base_bdevs": 4, 00:13:52.419 "num_base_bdevs_discovered": 3, 00:13:52.419 "num_base_bdevs_operational": 3, 00:13:52.419 "process": { 00:13:52.419 "type": "rebuild", 00:13:52.419 "target": "spare", 00:13:52.419 "progress": { 00:13:52.419 "blocks": 20480, 00:13:52.419 "percent": 31 00:13:52.419 } 00:13:52.419 }, 00:13:52.419 "base_bdevs_list": [ 00:13:52.419 { 00:13:52.419 "name": "spare", 00:13:52.419 "uuid": "a59bb75a-97c0-5956-8888-ed3cbe9fd335", 00:13:52.419 "is_configured": true, 00:13:52.419 "data_offset": 0, 00:13:52.419 "data_size": 65536 00:13:52.419 }, 00:13:52.419 { 00:13:52.419 "name": null, 00:13:52.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.419 "is_configured": false, 00:13:52.419 "data_offset": 0, 00:13:52.419 "data_size": 65536 00:13:52.419 }, 00:13:52.419 { 00:13:52.419 "name": "BaseBdev3", 00:13:52.419 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:52.419 "is_configured": true, 00:13:52.419 "data_offset": 0, 00:13:52.419 "data_size": 65536 00:13:52.419 }, 00:13:52.419 { 00:13:52.419 "name": "BaseBdev4", 00:13:52.419 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:52.419 "is_configured": true, 00:13:52.419 "data_offset": 0, 00:13:52.419 "data_size": 65536 00:13:52.419 } 00:13:52.419 ] 00:13:52.419 }' 00:13:52.419 
02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.419 02:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.419 02:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.419 02:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.679 [2024-10-13 02:28:11.342366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:52.938 137.50 IOPS, 412.50 MiB/s [2024-10-13T02:28:11.622Z] [2024-10-13 02:28:11.563031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.507 "name": "raid_bdev1", 00:13:53.507 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:53.507 "strip_size_kb": 0, 00:13:53.507 "state": "online", 00:13:53.507 "raid_level": "raid1", 00:13:53.507 "superblock": false, 00:13:53.507 "num_base_bdevs": 4, 00:13:53.507 "num_base_bdevs_discovered": 3, 00:13:53.507 "num_base_bdevs_operational": 3, 00:13:53.507 "process": { 00:13:53.507 "type": "rebuild", 00:13:53.507 "target": "spare", 00:13:53.507 "progress": { 00:13:53.507 "blocks": 38912, 00:13:53.507 "percent": 59 00:13:53.507 } 00:13:53.507 }, 00:13:53.507 "base_bdevs_list": [ 00:13:53.507 { 00:13:53.507 "name": "spare", 00:13:53.507 "uuid": "a59bb75a-97c0-5956-8888-ed3cbe9fd335", 00:13:53.507 "is_configured": true, 00:13:53.507 "data_offset": 0, 00:13:53.507 "data_size": 65536 00:13:53.507 }, 00:13:53.507 { 00:13:53.507 "name": null, 00:13:53.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.507 "is_configured": false, 00:13:53.507 "data_offset": 0, 00:13:53.507 "data_size": 65536 00:13:53.507 }, 00:13:53.507 { 00:13:53.507 "name": "BaseBdev3", 00:13:53.507 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:53.507 "is_configured": true, 00:13:53.507 "data_offset": 0, 00:13:53.507 "data_size": 65536 00:13:53.507 }, 00:13:53.507 { 00:13:53.507 "name": "BaseBdev4", 00:13:53.507 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:53.507 "is_configured": true, 00:13:53.507 "data_offset": 0, 00:13:53.507 "data_size": 65536 00:13:53.507 } 00:13:53.507 ] 00:13:53.507 }' 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.507 [2024-10-13 02:28:12.094178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
40960 offset_begin: 36864 offset_end: 43008 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.507 02:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.026 121.20 IOPS, 363.60 MiB/s [2024-10-13T02:28:12.710Z] [2024-10-13 02:28:12.511193] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:54.596 "name": "raid_bdev1", 00:13:54.596 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:54.596 "strip_size_kb": 0, 00:13:54.596 "state": "online", 00:13:54.596 "raid_level": "raid1", 00:13:54.596 "superblock": false, 00:13:54.596 "num_base_bdevs": 4, 00:13:54.596 "num_base_bdevs_discovered": 3, 00:13:54.596 "num_base_bdevs_operational": 3, 00:13:54.596 "process": { 00:13:54.596 "type": "rebuild", 00:13:54.596 "target": "spare", 00:13:54.596 "progress": { 00:13:54.596 "blocks": 59392, 00:13:54.596 "percent": 90 00:13:54.596 } 00:13:54.596 }, 00:13:54.596 "base_bdevs_list": [ 00:13:54.596 { 00:13:54.596 "name": "spare", 00:13:54.596 "uuid": "a59bb75a-97c0-5956-8888-ed3cbe9fd335", 00:13:54.596 "is_configured": true, 00:13:54.596 "data_offset": 0, 00:13:54.596 "data_size": 65536 00:13:54.596 }, 00:13:54.596 { 00:13:54.596 "name": null, 00:13:54.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.596 "is_configured": false, 00:13:54.596 "data_offset": 0, 00:13:54.596 "data_size": 65536 00:13:54.596 }, 00:13:54.596 { 00:13:54.596 "name": "BaseBdev3", 00:13:54.596 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:54.596 "is_configured": true, 00:13:54.596 "data_offset": 0, 00:13:54.596 "data_size": 65536 00:13:54.596 }, 00:13:54.596 { 00:13:54.596 "name": "BaseBdev4", 00:13:54.596 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:54.596 "is_configured": true, 00:13:54.596 "data_offset": 0, 00:13:54.596 "data_size": 65536 00:13:54.596 } 00:13:54.596 ] 00:13:54.596 }' 00:13:54.596 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.856 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.856 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.856 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:13:54.856 02:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.856 106.00 IOPS, 318.00 MiB/s [2024-10-13T02:28:13.540Z] [2024-10-13 02:28:13.497726] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:55.116 [2024-10-13 02:28:13.608255] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:55.116 [2024-10-13 02:28:13.611514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.686 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.945 96.00 IOPS, 288.00 MiB/s [2024-10-13T02:28:14.629Z] 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.945 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.945 "name": "raid_bdev1", 00:13:55.945 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 
00:13:55.945 "strip_size_kb": 0, 00:13:55.945 "state": "online", 00:13:55.945 "raid_level": "raid1", 00:13:55.945 "superblock": false, 00:13:55.945 "num_base_bdevs": 4, 00:13:55.945 "num_base_bdevs_discovered": 3, 00:13:55.945 "num_base_bdevs_operational": 3, 00:13:55.945 "base_bdevs_list": [ 00:13:55.945 { 00:13:55.945 "name": "spare", 00:13:55.945 "uuid": "a59bb75a-97c0-5956-8888-ed3cbe9fd335", 00:13:55.945 "is_configured": true, 00:13:55.946 "data_offset": 0, 00:13:55.946 "data_size": 65536 00:13:55.946 }, 00:13:55.946 { 00:13:55.946 "name": null, 00:13:55.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.946 "is_configured": false, 00:13:55.946 "data_offset": 0, 00:13:55.946 "data_size": 65536 00:13:55.946 }, 00:13:55.946 { 00:13:55.946 "name": "BaseBdev3", 00:13:55.946 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:55.946 "is_configured": true, 00:13:55.946 "data_offset": 0, 00:13:55.946 "data_size": 65536 00:13:55.946 }, 00:13:55.946 { 00:13:55.946 "name": "BaseBdev4", 00:13:55.946 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:55.946 "is_configured": true, 00:13:55.946 "data_offset": 0, 00:13:55.946 "data_size": 65536 00:13:55.946 } 00:13:55.946 ] 00:13:55.946 }' 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.946 "name": "raid_bdev1", 00:13:55.946 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:55.946 "strip_size_kb": 0, 00:13:55.946 "state": "online", 00:13:55.946 "raid_level": "raid1", 00:13:55.946 "superblock": false, 00:13:55.946 "num_base_bdevs": 4, 00:13:55.946 "num_base_bdevs_discovered": 3, 00:13:55.946 "num_base_bdevs_operational": 3, 00:13:55.946 "base_bdevs_list": [ 00:13:55.946 { 00:13:55.946 "name": "spare", 00:13:55.946 "uuid": "a59bb75a-97c0-5956-8888-ed3cbe9fd335", 00:13:55.946 "is_configured": true, 00:13:55.946 "data_offset": 0, 00:13:55.946 "data_size": 65536 00:13:55.946 }, 00:13:55.946 { 00:13:55.946 "name": null, 00:13:55.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.946 "is_configured": false, 00:13:55.946 "data_offset": 0, 00:13:55.946 "data_size": 65536 00:13:55.946 }, 00:13:55.946 { 00:13:55.946 "name": "BaseBdev3", 00:13:55.946 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:55.946 "is_configured": true, 00:13:55.946 "data_offset": 0, 00:13:55.946 "data_size": 
65536 00:13:55.946 }, 00:13:55.946 { 00:13:55.946 "name": "BaseBdev4", 00:13:55.946 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:55.946 "is_configured": true, 00:13:55.946 "data_offset": 0, 00:13:55.946 "data_size": 65536 00:13:55.946 } 00:13:55.946 ] 00:13:55.946 }' 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.946 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.206 02:28:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.206 "name": "raid_bdev1", 00:13:56.206 "uuid": "a8fc65e0-7e60-46fa-b54f-07ab11bb3c2c", 00:13:56.206 "strip_size_kb": 0, 00:13:56.206 "state": "online", 00:13:56.206 "raid_level": "raid1", 00:13:56.206 "superblock": false, 00:13:56.206 "num_base_bdevs": 4, 00:13:56.206 "num_base_bdevs_discovered": 3, 00:13:56.206 "num_base_bdevs_operational": 3, 00:13:56.206 "base_bdevs_list": [ 00:13:56.206 { 00:13:56.206 "name": "spare", 00:13:56.206 "uuid": "a59bb75a-97c0-5956-8888-ed3cbe9fd335", 00:13:56.206 "is_configured": true, 00:13:56.206 "data_offset": 0, 00:13:56.206 "data_size": 65536 00:13:56.206 }, 00:13:56.206 { 00:13:56.206 "name": null, 00:13:56.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.206 "is_configured": false, 00:13:56.206 "data_offset": 0, 00:13:56.206 "data_size": 65536 00:13:56.206 }, 00:13:56.206 { 00:13:56.206 "name": "BaseBdev3", 00:13:56.206 "uuid": "b73c98eb-7e12-5eac-a64b-da3a3fb3f231", 00:13:56.206 "is_configured": true, 00:13:56.206 "data_offset": 0, 00:13:56.206 "data_size": 65536 00:13:56.206 }, 00:13:56.206 { 00:13:56.206 "name": "BaseBdev4", 00:13:56.206 "uuid": "015636c2-6fab-55ed-87ee-f12df8110294", 00:13:56.206 "is_configured": true, 00:13:56.206 "data_offset": 0, 00:13:56.206 "data_size": 65536 00:13:56.206 } 00:13:56.206 ] 00:13:56.206 }' 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.206 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:56.480 02:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.480 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.480 02:28:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.480 [2024-10-13 02:28:14.985866] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.480 [2024-10-13 02:28:14.985917] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.480 00:13:56.480 Latency(us) 00:13:56.480 [2024-10-13T02:28:15.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.480 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:56.480 raid_bdev1 : 7.66 91.80 275.41 0.00 0.00 15523.12 279.03 114931.26 00:13:56.480 [2024-10-13T02:28:15.164Z] =================================================================================================================== 00:13:56.480 [2024-10-13T02:28:15.164Z] Total : 91.80 275.41 0.00 0.00 15523.12 279.03 114931.26 00:13:56.480 [2024-10-13 02:28:15.012795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.480 [2024-10-13 02:28:15.012839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.480 [2024-10-13 02:28:15.012962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.480 [2024-10-13 02:28:15.012978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:56.480 { 00:13:56.480 "results": [ 00:13:56.480 { 00:13:56.480 "job": "raid_bdev1", 00:13:56.480 "core_mask": "0x1", 00:13:56.480 "workload": "randrw", 00:13:56.480 "percentage": 50, 00:13:56.480 "status": "finished", 00:13:56.480 "queue_depth": 2, 00:13:56.480 "io_size": 3145728, 
00:13:56.480 "runtime": 7.657572, 00:13:56.480 "iops": 91.80455632673124, 00:13:56.481 "mibps": 275.41366898019373, 00:13:56.481 "io_failed": 0, 00:13:56.481 "io_timeout": 0, 00:13:56.481 "avg_latency_us": 15523.115381987367, 00:13:56.481 "min_latency_us": 279.0288209606987, 00:13:56.481 "max_latency_us": 114931.2558951965 00:13:56.481 } 00:13:56.481 ], 00:13:56.481 "core_count": 1 00:13:56.481 } 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.481 
02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.481 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:56.743 /dev/nbd0 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.743 1+0 records in 00:13:56.743 1+0 records out 00:13:56.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607223 s, 6.7 MB/s 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.743 
02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.743 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:57.002 /dev/nbd1 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.002 1+0 records in 00:13:57.002 1+0 records out 00:13:57.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604168 s, 6.8 MB/s 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.002 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.262 02:28:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:57.522 /dev/nbd1 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@869 -- # local i 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.522 1+0 records in 00:13:57.522 1+0 records out 00:13:57.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525309 s, 7.8 MB/s 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd1 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.522 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.781 02:28:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.781 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89290 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89290 ']' 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89290 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89290 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.040 killing process with pid 89290 00:13:58.040 Received shutdown signal, test time was about 9.245151 seconds 00:13:58.040 00:13:58.040 Latency(us) 00:13:58.040 [2024-10-13T02:28:16.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.040 [2024-10-13T02:28:16.724Z] =================================================================================================================== 00:13:58.040 [2024-10-13T02:28:16.724Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89290' 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89290 00:13:58.040 [2024-10-13 02:28:16.594245] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.040 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89290 00:13:58.040 [2024-10-13 02:28:16.638006] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:58.300 00:13:58.300 real 0m11.215s 00:13:58.300 user 0m14.488s 00:13:58.300 sys 0m1.789s 00:13:58.300 ************************************ 00:13:58.300 END TEST raid_rebuild_test_io 00:13:58.300 ************************************ 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.300 02:28:16 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:58.300 02:28:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:58.300 02:28:16 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.300 02:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.300 ************************************ 00:13:58.300 START TEST raid_rebuild_test_sb_io 00:13:58.300 ************************************ 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.300 02:28:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89681 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 89681 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89681 ']' 00:13:58.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:58.300 02:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.560 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.560 Zero copy mechanism will not be used. 00:13:58.560 [2024-10-13 02:28:17.047984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:58.560 [2024-10-13 02:28:17.048109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89681 ] 00:13:58.560 [2024-10-13 02:28:17.195118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.560 [2024-10-13 02:28:17.238362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.819 [2024-10-13 02:28:17.279200] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.819 [2024-10-13 02:28:17.279242] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.388 BaseBdev1_malloc 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.388 [2024-10-13 02:28:17.888250] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:59.388 [2024-10-13 02:28:17.888396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.388 [2024-10-13 02:28:17.888424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:59.388 [2024-10-13 02:28:17.888446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.388 [2024-10-13 02:28:17.890448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.388 [2024-10-13 02:28:17.890488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.388 BaseBdev1 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.388 BaseBdev2_malloc 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.388 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.388 [2024-10-13 02:28:17.932354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:59.388 [2024-10-13 02:28:17.932453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:59.388 [2024-10-13 02:28:17.932498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.389 [2024-10-13 02:28:17.932520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.389 [2024-10-13 02:28:17.937279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.389 [2024-10-13 02:28:17.937350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.389 BaseBdev2 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 BaseBdev3_malloc 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 [2024-10-13 02:28:17.962860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:59.389 [2024-10-13 02:28:17.962980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.389 [2024-10-13 02:28:17.963010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.389 
[2024-10-13 02:28:17.963019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.389 [2024-10-13 02:28:17.964977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.389 [2024-10-13 02:28:17.965011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:59.389 BaseBdev3 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 BaseBdev4_malloc 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 [2024-10-13 02:28:17.990995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:59.389 [2024-10-13 02:28:17.991041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.389 [2024-10-13 02:28:17.991060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:59.389 [2024-10-13 02:28:17.991068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.389 [2024-10-13 02:28:17.992980] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.389 [2024-10-13 02:28:17.993011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:59.389 BaseBdev4 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.389 02:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 spare_malloc 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 spare_delay 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 [2024-10-13 02:28:18.031061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:59.389 [2024-10-13 02:28:18.031106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.389 [2024-10-13 02:28:18.031124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009c80 00:13:59.389 [2024-10-13 02:28:18.031132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.389 [2024-10-13 02:28:18.033211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.389 [2024-10-13 02:28:18.033247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.389 spare 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 [2024-10-13 02:28:18.043114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.389 [2024-10-13 02:28:18.044937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.389 [2024-10-13 02:28:18.044996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.389 [2024-10-13 02:28:18.045043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:59.389 [2024-10-13 02:28:18.045195] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:59.389 [2024-10-13 02:28:18.045206] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:59.389 [2024-10-13 02:28:18.045458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:59.389 [2024-10-13 02:28:18.045595] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:59.389 [2024-10-13 02:28:18.045619] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:59.389 [2024-10-13 02:28:18.045728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.389 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.647 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.647 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.647 "name": "raid_bdev1", 00:13:59.647 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:13:59.647 "strip_size_kb": 0, 00:13:59.647 "state": "online", 00:13:59.647 "raid_level": "raid1", 00:13:59.647 "superblock": true, 00:13:59.647 "num_base_bdevs": 4, 00:13:59.647 "num_base_bdevs_discovered": 4, 00:13:59.647 "num_base_bdevs_operational": 4, 00:13:59.647 "base_bdevs_list": [ 00:13:59.647 { 00:13:59.647 "name": "BaseBdev1", 00:13:59.647 "uuid": "11d71279-0fb4-5c1c-8d30-91feccba9a35", 00:13:59.647 "is_configured": true, 00:13:59.647 "data_offset": 2048, 00:13:59.647 "data_size": 63488 00:13:59.647 }, 00:13:59.647 { 00:13:59.647 "name": "BaseBdev2", 00:13:59.647 "uuid": "bdced073-d68b-54c4-9c45-1d966d7ad6a2", 00:13:59.647 "is_configured": true, 00:13:59.647 "data_offset": 2048, 00:13:59.647 "data_size": 63488 00:13:59.647 }, 00:13:59.647 { 00:13:59.647 "name": "BaseBdev3", 00:13:59.647 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:13:59.647 "is_configured": true, 00:13:59.647 "data_offset": 2048, 00:13:59.647 "data_size": 63488 00:13:59.647 }, 00:13:59.647 { 00:13:59.647 "name": "BaseBdev4", 00:13:59.647 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:13:59.647 "is_configured": true, 00:13:59.647 "data_offset": 2048, 00:13:59.647 "data_size": 63488 00:13:59.647 } 00:13:59.647 ] 00:13:59.647 }' 00:13:59.647 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.647 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.906 [2024-10-13 02:28:18.542478] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.906 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.165 [2024-10-13 02:28:18.622051] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.165 02:28:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.165 "name": "raid_bdev1", 00:14:00.165 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:00.165 "strip_size_kb": 0, 00:14:00.165 "state": "online", 00:14:00.165 "raid_level": "raid1", 00:14:00.165 
"superblock": true, 00:14:00.165 "num_base_bdevs": 4, 00:14:00.165 "num_base_bdevs_discovered": 3, 00:14:00.165 "num_base_bdevs_operational": 3, 00:14:00.165 "base_bdevs_list": [ 00:14:00.165 { 00:14:00.165 "name": null, 00:14:00.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.165 "is_configured": false, 00:14:00.165 "data_offset": 0, 00:14:00.165 "data_size": 63488 00:14:00.165 }, 00:14:00.165 { 00:14:00.165 "name": "BaseBdev2", 00:14:00.165 "uuid": "bdced073-d68b-54c4-9c45-1d966d7ad6a2", 00:14:00.165 "is_configured": true, 00:14:00.165 "data_offset": 2048, 00:14:00.165 "data_size": 63488 00:14:00.165 }, 00:14:00.165 { 00:14:00.165 "name": "BaseBdev3", 00:14:00.165 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:00.165 "is_configured": true, 00:14:00.165 "data_offset": 2048, 00:14:00.165 "data_size": 63488 00:14:00.165 }, 00:14:00.165 { 00:14:00.165 "name": "BaseBdev4", 00:14:00.165 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:00.165 "is_configured": true, 00:14:00.165 "data_offset": 2048, 00:14:00.165 "data_size": 63488 00:14:00.165 } 00:14:00.165 ] 00:14:00.165 }' 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.165 02:28:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.165 [2024-10-13 02:28:18.707845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:14:00.165 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:00.166 Zero copy mechanism will not be used. 00:14:00.166 Running I/O for 60 seconds... 
00:14:00.425 02:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.425 02:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.425 02:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.425 [2024-10-13 02:28:19.065077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.425 02:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.425 02:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:00.684 [2024-10-13 02:28:19.111399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:14:00.684 [2024-10-13 02:28:19.113381] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.684 [2024-10-13 02:28:19.227671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.684 [2024-10-13 02:28:19.228285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.684 [2024-10-13 02:28:19.350377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:00.684 [2024-10-13 02:28:19.350697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:01.253 [2024-10-13 02:28:19.690834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:01.512 199.00 IOPS, 597.00 MiB/s [2024-10-13T02:28:20.196Z] 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.512 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.512 "name": "raid_bdev1", 00:14:01.512 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:01.512 "strip_size_kb": 0, 00:14:01.512 "state": "online", 00:14:01.512 "raid_level": "raid1", 00:14:01.512 "superblock": true, 00:14:01.512 "num_base_bdevs": 4, 00:14:01.512 "num_base_bdevs_discovered": 4, 00:14:01.512 "num_base_bdevs_operational": 4, 00:14:01.512 "process": { 00:14:01.512 "type": "rebuild", 00:14:01.512 "target": "spare", 00:14:01.512 "progress": { 00:14:01.512 "blocks": 16384, 00:14:01.512 "percent": 25 00:14:01.512 } 00:14:01.512 }, 00:14:01.512 "base_bdevs_list": [ 00:14:01.512 { 00:14:01.512 "name": "spare", 00:14:01.513 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:01.513 "is_configured": true, 00:14:01.513 "data_offset": 2048, 00:14:01.513 "data_size": 63488 00:14:01.513 }, 00:14:01.513 { 00:14:01.513 "name": "BaseBdev2", 00:14:01.513 "uuid": "bdced073-d68b-54c4-9c45-1d966d7ad6a2", 00:14:01.513 "is_configured": true, 
00:14:01.513 "data_offset": 2048, 00:14:01.513 "data_size": 63488 00:14:01.513 }, 00:14:01.513 { 00:14:01.513 "name": "BaseBdev3", 00:14:01.513 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:01.513 "is_configured": true, 00:14:01.513 "data_offset": 2048, 00:14:01.513 "data_size": 63488 00:14:01.513 }, 00:14:01.513 { 00:14:01.513 "name": "BaseBdev4", 00:14:01.513 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:01.513 "is_configured": true, 00:14:01.513 "data_offset": 2048, 00:14:01.513 "data_size": 63488 00:14:01.513 } 00:14:01.513 ] 00:14:01.513 }' 00:14:01.513 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.772 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.772 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.772 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.772 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:01.772 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.772 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.772 [2024-10-13 02:28:20.268951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.032 [2024-10-13 02:28:20.459940] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.032 [2024-10-13 02:28:20.469405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.032 [2024-10-13 02:28:20.469450] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.032 [2024-10-13 02:28:20.469475] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such 
device 00:14:02.032 [2024-10-13 02:28:20.492377] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.032 02:28:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.032 "name": "raid_bdev1", 00:14:02.032 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:02.032 "strip_size_kb": 0, 00:14:02.032 "state": "online", 00:14:02.032 "raid_level": "raid1", 00:14:02.032 "superblock": true, 00:14:02.032 "num_base_bdevs": 4, 00:14:02.032 "num_base_bdevs_discovered": 3, 00:14:02.032 "num_base_bdevs_operational": 3, 00:14:02.032 "base_bdevs_list": [ 00:14:02.032 { 00:14:02.032 "name": null, 00:14:02.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.032 "is_configured": false, 00:14:02.032 "data_offset": 0, 00:14:02.032 "data_size": 63488 00:14:02.032 }, 00:14:02.032 { 00:14:02.032 "name": "BaseBdev2", 00:14:02.032 "uuid": "bdced073-d68b-54c4-9c45-1d966d7ad6a2", 00:14:02.032 "is_configured": true, 00:14:02.032 "data_offset": 2048, 00:14:02.032 "data_size": 63488 00:14:02.032 }, 00:14:02.032 { 00:14:02.032 "name": "BaseBdev3", 00:14:02.032 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:02.032 "is_configured": true, 00:14:02.032 "data_offset": 2048, 00:14:02.032 "data_size": 63488 00:14:02.032 }, 00:14:02.032 { 00:14:02.032 "name": "BaseBdev4", 00:14:02.032 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:02.032 "is_configured": true, 00:14:02.032 "data_offset": 2048, 00:14:02.032 "data_size": 63488 00:14:02.032 } 00:14:02.032 ] 00:14:02.032 }' 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.032 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.292 161.00 IOPS, 483.00 MiB/s [2024-10-13T02:28:20.976Z] 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.292 "name": "raid_bdev1", 00:14:02.292 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:02.292 "strip_size_kb": 0, 00:14:02.292 "state": "online", 00:14:02.292 "raid_level": "raid1", 00:14:02.292 "superblock": true, 00:14:02.292 "num_base_bdevs": 4, 00:14:02.292 "num_base_bdevs_discovered": 3, 00:14:02.292 "num_base_bdevs_operational": 3, 00:14:02.292 "base_bdevs_list": [ 00:14:02.292 { 00:14:02.292 "name": null, 00:14:02.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.292 "is_configured": false, 00:14:02.292 "data_offset": 0, 00:14:02.292 "data_size": 63488 00:14:02.292 }, 00:14:02.292 { 00:14:02.292 "name": "BaseBdev2", 00:14:02.292 "uuid": "bdced073-d68b-54c4-9c45-1d966d7ad6a2", 00:14:02.292 "is_configured": true, 00:14:02.292 "data_offset": 2048, 00:14:02.292 "data_size": 63488 00:14:02.292 }, 00:14:02.292 { 00:14:02.292 "name": "BaseBdev3", 00:14:02.292 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:02.292 "is_configured": true, 00:14:02.292 "data_offset": 2048, 00:14:02.292 "data_size": 63488 00:14:02.292 }, 00:14:02.292 { 00:14:02.292 "name": 
"BaseBdev4", 00:14:02.292 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:02.292 "is_configured": true, 00:14:02.292 "data_offset": 2048, 00:14:02.292 "data_size": 63488 00:14:02.292 } 00:14:02.292 ] 00:14:02.292 }' 00:14:02.292 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.551 02:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.551 02:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.551 02:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.551 02:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.551 02:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.552 02:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.552 [2024-10-13 02:28:21.059126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.552 02:28:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.552 02:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:02.552 [2024-10-13 02:28:21.122028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:02.552 [2024-10-13 02:28:21.123936] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.811 [2024-10-13 02:28:21.236722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:02.811 [2024-10-13 02:28:21.237943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:02.811 [2024-10-13 02:28:21.459851] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:02.811 [2024-10-13 02:28:21.460128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:03.070 [2024-10-13 02:28:21.709795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:03.329 149.33 IOPS, 448.00 MiB/s [2024-10-13T02:28:22.013Z] [2024-10-13 02:28:21.819273] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.589 [2024-10-13 02:28:22.137335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:03.589 [2024-10-13 02:28:22.138559] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.589 "name": "raid_bdev1", 00:14:03.589 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:03.589 "strip_size_kb": 0, 00:14:03.589 "state": "online", 00:14:03.589 "raid_level": "raid1", 00:14:03.589 "superblock": true, 00:14:03.589 "num_base_bdevs": 4, 00:14:03.589 "num_base_bdevs_discovered": 4, 00:14:03.589 "num_base_bdevs_operational": 4, 00:14:03.589 "process": { 00:14:03.589 "type": "rebuild", 00:14:03.589 "target": "spare", 00:14:03.589 "progress": { 00:14:03.589 "blocks": 12288, 00:14:03.589 "percent": 19 00:14:03.589 } 00:14:03.589 }, 00:14:03.589 "base_bdevs_list": [ 00:14:03.589 { 00:14:03.589 "name": "spare", 00:14:03.589 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:03.589 "is_configured": true, 00:14:03.589 "data_offset": 2048, 00:14:03.589 "data_size": 63488 00:14:03.589 }, 00:14:03.589 { 00:14:03.589 "name": "BaseBdev2", 00:14:03.589 "uuid": "bdced073-d68b-54c4-9c45-1d966d7ad6a2", 00:14:03.589 "is_configured": true, 00:14:03.589 "data_offset": 2048, 00:14:03.589 "data_size": 63488 00:14:03.589 }, 00:14:03.589 { 00:14:03.589 "name": "BaseBdev3", 00:14:03.589 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:03.589 "is_configured": true, 00:14:03.589 "data_offset": 2048, 00:14:03.589 "data_size": 63488 00:14:03.589 }, 00:14:03.589 { 00:14:03.589 "name": "BaseBdev4", 00:14:03.589 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:03.589 "is_configured": true, 00:14:03.589 "data_offset": 2048, 00:14:03.589 "data_size": 63488 00:14:03.589 } 00:14:03.589 ] 00:14:03.589 }' 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:03.589 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.589 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.589 [2024-10-13 02:28:22.260993] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.848 [2024-10-13 02:28:22.355477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:03.848 [2024-10-13 02:28:22.356147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:04.108 [2024-10-13 02:28:22.557916] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:14:04.108 [2024-10-13 02:28:22.557990] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:14:04.108 [2024-10-13 02:28:22.564227] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.108 "name": "raid_bdev1", 00:14:04.108 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:04.108 "strip_size_kb": 0, 00:14:04.108 "state": "online", 00:14:04.108 "raid_level": "raid1", 00:14:04.108 "superblock": true, 00:14:04.108 "num_base_bdevs": 4, 00:14:04.108 "num_base_bdevs_discovered": 3, 
00:14:04.108 "num_base_bdevs_operational": 3, 00:14:04.108 "process": { 00:14:04.108 "type": "rebuild", 00:14:04.108 "target": "spare", 00:14:04.108 "progress": { 00:14:04.108 "blocks": 16384, 00:14:04.108 "percent": 25 00:14:04.108 } 00:14:04.108 }, 00:14:04.108 "base_bdevs_list": [ 00:14:04.108 { 00:14:04.108 "name": "spare", 00:14:04.108 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 00:14:04.108 "data_size": 63488 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": null, 00:14:04.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.108 "is_configured": false, 00:14:04.108 "data_offset": 0, 00:14:04.108 "data_size": 63488 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": "BaseBdev3", 00:14:04.108 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 00:14:04.108 "data_size": 63488 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": "BaseBdev4", 00:14:04.108 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 00:14:04.108 "data_size": 63488 00:14:04.108 } 00:14:04.108 ] 00:14:04.108 }' 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.108 127.50 IOPS, 382.50 MiB/s [2024-10-13T02:28:22.792Z] 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=410 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.108 02:28:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.108 "name": "raid_bdev1", 00:14:04.108 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:04.108 "strip_size_kb": 0, 00:14:04.108 "state": "online", 00:14:04.108 "raid_level": "raid1", 00:14:04.108 "superblock": true, 00:14:04.108 "num_base_bdevs": 4, 00:14:04.108 "num_base_bdevs_discovered": 3, 00:14:04.108 "num_base_bdevs_operational": 3, 00:14:04.108 "process": { 00:14:04.108 "type": "rebuild", 00:14:04.108 "target": "spare", 00:14:04.108 "progress": { 00:14:04.108 "blocks": 16384, 00:14:04.108 "percent": 25 00:14:04.108 } 00:14:04.108 }, 00:14:04.108 "base_bdevs_list": [ 00:14:04.108 { 00:14:04.108 "name": "spare", 00:14:04.108 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 
00:14:04.108 "data_size": 63488 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": null, 00:14:04.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.108 "is_configured": false, 00:14:04.108 "data_offset": 0, 00:14:04.108 "data_size": 63488 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": "BaseBdev3", 00:14:04.108 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 00:14:04.108 "data_size": 63488 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": "BaseBdev4", 00:14:04.108 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 00:14:04.108 "data_size": 63488 00:14:04.108 } 00:14:04.108 ] 00:14:04.108 }' 00:14:04.108 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.368 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.368 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.368 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.368 02:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:04.368 [2024-10-13 02:28:22.906058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:04.368 [2024-10-13 02:28:23.025237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:05.306 [2024-10-13 02:28:23.693956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:05.306 113.40 IOPS, 340.20 MiB/s [2024-10-13T02:28:23.990Z] 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.306 02:28:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.306 "name": "raid_bdev1", 00:14:05.306 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:05.306 "strip_size_kb": 0, 00:14:05.306 "state": "online", 00:14:05.306 "raid_level": "raid1", 00:14:05.306 "superblock": true, 00:14:05.306 "num_base_bdevs": 4, 00:14:05.306 "num_base_bdevs_discovered": 3, 00:14:05.306 "num_base_bdevs_operational": 3, 00:14:05.306 "process": { 00:14:05.306 "type": "rebuild", 00:14:05.306 "target": "spare", 00:14:05.306 "progress": { 00:14:05.306 "blocks": 34816, 00:14:05.306 "percent": 54 00:14:05.306 } 00:14:05.306 }, 00:14:05.306 "base_bdevs_list": [ 00:14:05.306 { 00:14:05.306 "name": "spare", 00:14:05.306 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:05.306 "is_configured": true, 00:14:05.306 "data_offset": 2048, 
00:14:05.306 "data_size": 63488 00:14:05.306 }, 00:14:05.306 { 00:14:05.306 "name": null, 00:14:05.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.306 "is_configured": false, 00:14:05.306 "data_offset": 0, 00:14:05.306 "data_size": 63488 00:14:05.306 }, 00:14:05.306 { 00:14:05.306 "name": "BaseBdev3", 00:14:05.306 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:05.306 "is_configured": true, 00:14:05.306 "data_offset": 2048, 00:14:05.306 "data_size": 63488 00:14:05.306 }, 00:14:05.306 { 00:14:05.306 "name": "BaseBdev4", 00:14:05.306 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:05.306 "is_configured": true, 00:14:05.306 "data_offset": 2048, 00:14:05.306 "data_size": 63488 00:14:05.306 } 00:14:05.306 ] 00:14:05.306 }' 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.306 02:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.566 02:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.566 02:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.826 [2024-10-13 02:28:24.473134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:06.344 101.50 IOPS, 304.50 MiB/s [2024-10-13T02:28:25.028Z] 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.344 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.344 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.344 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:06.344 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.344 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.344 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.344 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.344 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.344 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.603 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.603 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.603 "name": "raid_bdev1", 00:14:06.603 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:06.603 "strip_size_kb": 0, 00:14:06.603 "state": "online", 00:14:06.603 "raid_level": "raid1", 00:14:06.603 "superblock": true, 00:14:06.603 "num_base_bdevs": 4, 00:14:06.603 "num_base_bdevs_discovered": 3, 00:14:06.603 "num_base_bdevs_operational": 3, 00:14:06.603 "process": { 00:14:06.603 "type": "rebuild", 00:14:06.603 "target": "spare", 00:14:06.603 "progress": { 00:14:06.603 "blocks": 53248, 00:14:06.603 "percent": 83 00:14:06.603 } 00:14:06.603 }, 00:14:06.603 "base_bdevs_list": [ 00:14:06.603 { 00:14:06.603 "name": "spare", 00:14:06.603 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:06.603 "is_configured": true, 00:14:06.603 "data_offset": 2048, 00:14:06.603 "data_size": 63488 00:14:06.603 }, 00:14:06.603 { 00:14:06.603 "name": null, 00:14:06.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.603 "is_configured": false, 00:14:06.603 "data_offset": 0, 00:14:06.603 "data_size": 63488 00:14:06.603 }, 00:14:06.603 { 00:14:06.603 "name": "BaseBdev3", 00:14:06.603 
"uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:06.603 "is_configured": true, 00:14:06.603 "data_offset": 2048, 00:14:06.603 "data_size": 63488 00:14:06.603 }, 00:14:06.603 { 00:14:06.603 "name": "BaseBdev4", 00:14:06.603 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:06.603 "is_configured": true, 00:14:06.603 "data_offset": 2048, 00:14:06.603 "data_size": 63488 00:14:06.603 } 00:14:06.603 ] 00:14:06.603 }' 00:14:06.603 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.603 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.603 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.603 [2024-10-13 02:28:25.133495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:06.603 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.603 02:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.863 [2024-10-13 02:28:25.336534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:07.122 [2024-10-13 02:28:25.557421] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:07.122 [2024-10-13 02:28:25.662163] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:07.122 [2024-10-13 02:28:25.665522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.691 92.00 IOPS, 276.00 MiB/s [2024-10-13T02:28:26.375Z] 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.691 "name": "raid_bdev1", 00:14:07.691 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:07.691 "strip_size_kb": 0, 00:14:07.691 "state": "online", 00:14:07.691 "raid_level": "raid1", 00:14:07.691 "superblock": true, 00:14:07.691 "num_base_bdevs": 4, 00:14:07.691 "num_base_bdevs_discovered": 3, 00:14:07.691 "num_base_bdevs_operational": 3, 00:14:07.691 "base_bdevs_list": [ 00:14:07.691 { 00:14:07.691 "name": "spare", 00:14:07.691 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:07.691 "is_configured": true, 00:14:07.691 "data_offset": 2048, 00:14:07.691 "data_size": 63488 00:14:07.691 }, 00:14:07.691 { 00:14:07.691 "name": null, 00:14:07.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.691 "is_configured": false, 00:14:07.691 "data_offset": 0, 00:14:07.691 "data_size": 63488 00:14:07.691 }, 00:14:07.691 { 00:14:07.691 "name": "BaseBdev3", 
00:14:07.691 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:07.691 "is_configured": true, 00:14:07.691 "data_offset": 2048, 00:14:07.691 "data_size": 63488 00:14:07.691 }, 00:14:07.691 { 00:14:07.691 "name": "BaseBdev4", 00:14:07.691 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:07.691 "is_configured": true, 00:14:07.691 "data_offset": 2048, 00:14:07.691 "data_size": 63488 00:14:07.691 } 00:14:07.691 ] 00:14:07.691 }' 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.691 "name": "raid_bdev1", 00:14:07.691 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:07.691 "strip_size_kb": 0, 00:14:07.691 "state": "online", 00:14:07.691 "raid_level": "raid1", 00:14:07.691 "superblock": true, 00:14:07.691 "num_base_bdevs": 4, 00:14:07.691 "num_base_bdevs_discovered": 3, 00:14:07.691 "num_base_bdevs_operational": 3, 00:14:07.691 "base_bdevs_list": [ 00:14:07.691 { 00:14:07.691 "name": "spare", 00:14:07.691 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:07.691 "is_configured": true, 00:14:07.691 "data_offset": 2048, 00:14:07.691 "data_size": 63488 00:14:07.691 }, 00:14:07.691 { 00:14:07.691 "name": null, 00:14:07.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.691 "is_configured": false, 00:14:07.691 "data_offset": 0, 00:14:07.691 "data_size": 63488 00:14:07.691 }, 00:14:07.691 { 00:14:07.691 "name": "BaseBdev3", 00:14:07.691 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:07.691 "is_configured": true, 00:14:07.691 "data_offset": 2048, 00:14:07.691 "data_size": 63488 00:14:07.691 }, 00:14:07.691 { 00:14:07.691 "name": "BaseBdev4", 00:14:07.691 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:07.691 "is_configured": true, 00:14:07.691 "data_offset": 2048, 00:14:07.691 "data_size": 63488 00:14:07.691 } 00:14:07.691 ] 00:14:07.691 }' 00:14:07.691 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.951 "name": "raid_bdev1", 00:14:07.951 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:07.951 "strip_size_kb": 0, 00:14:07.951 "state": "online", 
00:14:07.951 "raid_level": "raid1", 00:14:07.951 "superblock": true, 00:14:07.951 "num_base_bdevs": 4, 00:14:07.951 "num_base_bdevs_discovered": 3, 00:14:07.951 "num_base_bdevs_operational": 3, 00:14:07.951 "base_bdevs_list": [ 00:14:07.951 { 00:14:07.951 "name": "spare", 00:14:07.951 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:07.951 "is_configured": true, 00:14:07.951 "data_offset": 2048, 00:14:07.951 "data_size": 63488 00:14:07.951 }, 00:14:07.951 { 00:14:07.951 "name": null, 00:14:07.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.951 "is_configured": false, 00:14:07.951 "data_offset": 0, 00:14:07.951 "data_size": 63488 00:14:07.951 }, 00:14:07.951 { 00:14:07.951 "name": "BaseBdev3", 00:14:07.951 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:07.951 "is_configured": true, 00:14:07.951 "data_offset": 2048, 00:14:07.951 "data_size": 63488 00:14:07.951 }, 00:14:07.951 { 00:14:07.951 "name": "BaseBdev4", 00:14:07.951 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:07.951 "is_configured": true, 00:14:07.951 "data_offset": 2048, 00:14:07.951 "data_size": 63488 00:14:07.951 } 00:14:07.951 ] 00:14:07.951 }' 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.951 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.211 84.25 IOPS, 252.75 MiB/s [2024-10-13T02:28:26.895Z] 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.211 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.211 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.211 [2024-10-13 02:28:26.882344] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.211 [2024-10-13 02:28:26.882378] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:14:08.470 00:14:08.471 Latency(us) 00:14:08.471 [2024-10-13T02:28:27.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.471 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:08.471 raid_bdev1 : 8.26 83.45 250.36 0.00 0.00 17613.72 264.72 114931.26 00:14:08.471 [2024-10-13T02:28:27.155Z] =================================================================================================================== 00:14:08.471 [2024-10-13T02:28:27.155Z] Total : 83.45 250.36 0.00 0.00 17613.72 264.72 114931.26 00:14:08.471 { 00:14:08.471 "results": [ 00:14:08.471 { 00:14:08.471 "job": "raid_bdev1", 00:14:08.471 "core_mask": "0x1", 00:14:08.471 "workload": "randrw", 00:14:08.471 "percentage": 50, 00:14:08.471 "status": "finished", 00:14:08.471 "queue_depth": 2, 00:14:08.471 "io_size": 3145728, 00:14:08.471 "runtime": 8.256116, 00:14:08.471 "iops": 83.45328481334322, 00:14:08.471 "mibps": 250.35985444002966, 00:14:08.471 "io_failed": 0, 00:14:08.471 "io_timeout": 0, 00:14:08.471 "avg_latency_us": 17613.722879180637, 00:14:08.471 "min_latency_us": 264.71965065502184, 00:14:08.471 "max_latency_us": 114931.2558951965 00:14:08.471 } 00:14:08.471 ], 00:14:08.471 "core_count": 1 00:14:08.471 } 00:14:08.471 [2024-10-13 02:28:26.953292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.471 [2024-10-13 02:28:26.953332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.471 [2024-10-13 02:28:26.953428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.471 [2024-10-13 02:28:26.953438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:08.471 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.471 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- 
# jq length 00:14:08.471 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.471 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.471 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.471 02:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.471 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:08.731 /dev/nbd0 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.731 1+0 records in 00:14:08.731 1+0 records out 00:14:08.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568711 s, 7.2 MB/s 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:08.731 02:28:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.731 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:08.991 /dev/nbd1 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.991 1+0 records in 00:14:08.991 1+0 records out 00:14:08.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598056 s, 6.8 MB/s 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.991 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:09.250 02:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:09.508 /dev/nbd1 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 
00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.509 1+0 records in 00:14:09.509 1+0 records out 00:14:09.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294956 s, 13.9 MB/s 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.509 02:28:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.509 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:09.768 02:28:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.768 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.027 
[2024-10-13 02:28:28.522491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.027 [2024-10-13 02:28:28.522556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.027 [2024-10-13 02:28:28.522582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:10.027 [2024-10-13 02:28:28.522591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.027 [2024-10-13 02:28:28.524749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.027 [2024-10-13 02:28:28.524837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.027 [2024-10-13 02:28:28.524941] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:10.027 [2024-10-13 02:28:28.524980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.027 [2024-10-13 02:28:28.525088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.027 [2024-10-13 02:28:28.525175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.027 spare 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.027 [2024-10-13 02:28:28.625082] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:14:10.027 [2024-10-13 02:28:28.625105] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.027 [2024-10-13 02:28:28.625342] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:14:10.027 [2024-10-13 02:28:28.625477] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:14:10.027 [2024-10-13 02:28:28.625491] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:14:10.027 [2024-10-13 02:28:28.625619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.027 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.028 "name": "raid_bdev1", 00:14:10.028 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:10.028 "strip_size_kb": 0, 00:14:10.028 "state": "online", 00:14:10.028 "raid_level": "raid1", 00:14:10.028 "superblock": true, 00:14:10.028 "num_base_bdevs": 4, 00:14:10.028 "num_base_bdevs_discovered": 3, 00:14:10.028 "num_base_bdevs_operational": 3, 00:14:10.028 "base_bdevs_list": [ 00:14:10.028 { 00:14:10.028 "name": "spare", 00:14:10.028 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:10.028 "is_configured": true, 00:14:10.028 "data_offset": 2048, 00:14:10.028 "data_size": 63488 00:14:10.028 }, 00:14:10.028 { 00:14:10.028 "name": null, 00:14:10.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.028 "is_configured": false, 00:14:10.028 "data_offset": 2048, 00:14:10.028 "data_size": 63488 00:14:10.028 }, 00:14:10.028 { 00:14:10.028 "name": "BaseBdev3", 00:14:10.028 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:10.028 "is_configured": true, 00:14:10.028 "data_offset": 2048, 00:14:10.028 "data_size": 63488 00:14:10.028 }, 00:14:10.028 { 00:14:10.028 "name": "BaseBdev4", 00:14:10.028 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:10.028 "is_configured": true, 00:14:10.028 "data_offset": 2048, 00:14:10.028 "data_size": 63488 00:14:10.028 } 00:14:10.028 ] 00:14:10.028 }' 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.028 02:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.597 "name": "raid_bdev1", 00:14:10.597 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:10.597 "strip_size_kb": 0, 00:14:10.597 "state": "online", 00:14:10.597 "raid_level": "raid1", 00:14:10.597 "superblock": true, 00:14:10.597 "num_base_bdevs": 4, 00:14:10.597 "num_base_bdevs_discovered": 3, 00:14:10.597 "num_base_bdevs_operational": 3, 00:14:10.597 "base_bdevs_list": [ 00:14:10.597 { 00:14:10.597 "name": "spare", 00:14:10.597 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:10.597 "is_configured": true, 00:14:10.597 "data_offset": 2048, 00:14:10.597 "data_size": 63488 00:14:10.597 }, 00:14:10.597 { 00:14:10.597 "name": null, 00:14:10.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.597 "is_configured": false, 00:14:10.597 "data_offset": 2048, 00:14:10.597 "data_size": 63488 00:14:10.597 }, 00:14:10.597 { 00:14:10.597 "name": 
"BaseBdev3", 00:14:10.597 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:10.597 "is_configured": true, 00:14:10.597 "data_offset": 2048, 00:14:10.597 "data_size": 63488 00:14:10.597 }, 00:14:10.597 { 00:14:10.597 "name": "BaseBdev4", 00:14:10.597 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:10.597 "is_configured": true, 00:14:10.597 "data_offset": 2048, 00:14:10.597 "data_size": 63488 00:14:10.597 } 00:14:10.597 ] 00:14:10.597 }' 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.597 [2024-10-13 02:28:29.273306] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.597 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.857 "name": "raid_bdev1", 00:14:10.857 "uuid": 
"73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:10.857 "strip_size_kb": 0, 00:14:10.857 "state": "online", 00:14:10.857 "raid_level": "raid1", 00:14:10.857 "superblock": true, 00:14:10.857 "num_base_bdevs": 4, 00:14:10.857 "num_base_bdevs_discovered": 2, 00:14:10.857 "num_base_bdevs_operational": 2, 00:14:10.857 "base_bdevs_list": [ 00:14:10.857 { 00:14:10.857 "name": null, 00:14:10.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.857 "is_configured": false, 00:14:10.857 "data_offset": 0, 00:14:10.857 "data_size": 63488 00:14:10.857 }, 00:14:10.857 { 00:14:10.857 "name": null, 00:14:10.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.857 "is_configured": false, 00:14:10.857 "data_offset": 2048, 00:14:10.857 "data_size": 63488 00:14:10.857 }, 00:14:10.857 { 00:14:10.857 "name": "BaseBdev3", 00:14:10.857 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:10.857 "is_configured": true, 00:14:10.857 "data_offset": 2048, 00:14:10.857 "data_size": 63488 00:14:10.857 }, 00:14:10.857 { 00:14:10.857 "name": "BaseBdev4", 00:14:10.857 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:10.857 "is_configured": true, 00:14:10.857 "data_offset": 2048, 00:14:10.857 "data_size": 63488 00:14:10.857 } 00:14:10.857 ] 00:14:10.857 }' 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.857 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.117 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.117 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.117 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.117 [2024-10-13 02:28:29.708635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.117 [2024-10-13 02:28:29.708787] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:11.117 [2024-10-13 02:28:29.708801] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:11.117 [2024-10-13 02:28:29.708844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.117 [2024-10-13 02:28:29.712602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:14:11.117 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.117 02:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:11.117 [2024-10-13 02:28:29.714454] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.055 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.055 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.055 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.055 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.055 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.055 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.055 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.055 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.055 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.314 "name": "raid_bdev1", 00:14:12.314 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:12.314 "strip_size_kb": 0, 00:14:12.314 "state": "online", 00:14:12.314 "raid_level": "raid1", 00:14:12.314 "superblock": true, 00:14:12.314 "num_base_bdevs": 4, 00:14:12.314 "num_base_bdevs_discovered": 3, 00:14:12.314 "num_base_bdevs_operational": 3, 00:14:12.314 "process": { 00:14:12.314 "type": "rebuild", 00:14:12.314 "target": "spare", 00:14:12.314 "progress": { 00:14:12.314 "blocks": 20480, 00:14:12.314 "percent": 32 00:14:12.314 } 00:14:12.314 }, 00:14:12.314 "base_bdevs_list": [ 00:14:12.314 { 00:14:12.314 "name": "spare", 00:14:12.314 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:12.314 "is_configured": true, 00:14:12.314 "data_offset": 2048, 00:14:12.314 "data_size": 63488 00:14:12.314 }, 00:14:12.314 { 00:14:12.314 "name": null, 00:14:12.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.314 "is_configured": false, 00:14:12.314 "data_offset": 2048, 00:14:12.314 "data_size": 63488 00:14:12.314 }, 00:14:12.314 { 00:14:12.314 "name": "BaseBdev3", 00:14:12.314 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:12.314 "is_configured": true, 00:14:12.314 "data_offset": 2048, 00:14:12.314 "data_size": 63488 00:14:12.314 }, 00:14:12.314 { 00:14:12.314 "name": "BaseBdev4", 00:14:12.314 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:12.314 "is_configured": true, 00:14:12.314 "data_offset": 2048, 00:14:12.314 "data_size": 63488 00:14:12.314 } 00:14:12.314 ] 00:14:12.314 }' 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.314 [2024-10-13 02:28:30.879596] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.314 [2024-10-13 02:28:30.918330] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:12.314 [2024-10-13 02:28:30.918384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.314 [2024-10-13 02:28:30.918403] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.314 [2024-10-13 02:28:30.918409] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.314 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.315 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.315 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.315 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.315 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.315 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.315 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.315 "name": "raid_bdev1", 00:14:12.315 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:12.315 "strip_size_kb": 0, 00:14:12.315 "state": "online", 00:14:12.315 "raid_level": "raid1", 00:14:12.315 "superblock": true, 00:14:12.315 "num_base_bdevs": 4, 00:14:12.315 "num_base_bdevs_discovered": 2, 00:14:12.315 "num_base_bdevs_operational": 2, 00:14:12.315 "base_bdevs_list": [ 00:14:12.315 { 00:14:12.315 "name": null, 00:14:12.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.315 "is_configured": false, 00:14:12.315 "data_offset": 0, 00:14:12.315 "data_size": 63488 00:14:12.315 }, 00:14:12.315 { 00:14:12.315 "name": null, 00:14:12.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.315 "is_configured": false, 00:14:12.315 "data_offset": 2048, 00:14:12.315 "data_size": 63488 00:14:12.315 }, 00:14:12.315 { 00:14:12.315 "name": "BaseBdev3", 00:14:12.315 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:12.315 "is_configured": true, 00:14:12.315 "data_offset": 2048, 
00:14:12.315 "data_size": 63488 00:14:12.315 }, 00:14:12.315 { 00:14:12.315 "name": "BaseBdev4", 00:14:12.315 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:12.315 "is_configured": true, 00:14:12.315 "data_offset": 2048, 00:14:12.315 "data_size": 63488 00:14:12.315 } 00:14:12.315 ] 00:14:12.315 }' 00:14:12.315 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.315 02:28:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.884 02:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.884 02:28:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.884 02:28:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.884 [2024-10-13 02:28:31.397618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.884 [2024-10-13 02:28:31.397678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.884 [2024-10-13 02:28:31.397705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:12.884 [2024-10-13 02:28:31.397714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.884 [2024-10-13 02:28:31.398145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.884 [2024-10-13 02:28:31.398162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.884 [2024-10-13 02:28:31.398250] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:12.884 [2024-10-13 02:28:31.398261] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:12.884 [2024-10-13 02:28:31.398272] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding 
bdev spare to raid bdev raid_bdev1. 00:14:12.884 [2024-10-13 02:28:31.398300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.884 [2024-10-13 02:28:31.401835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:14:12.884 spare 00:14:12.884 02:28:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.884 02:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:12.884 [2024-10-13 02:28:31.403733] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:13.822 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.823 "name": "raid_bdev1", 00:14:13.823 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:13.823 
"strip_size_kb": 0, 00:14:13.823 "state": "online", 00:14:13.823 "raid_level": "raid1", 00:14:13.823 "superblock": true, 00:14:13.823 "num_base_bdevs": 4, 00:14:13.823 "num_base_bdevs_discovered": 3, 00:14:13.823 "num_base_bdevs_operational": 3, 00:14:13.823 "process": { 00:14:13.823 "type": "rebuild", 00:14:13.823 "target": "spare", 00:14:13.823 "progress": { 00:14:13.823 "blocks": 20480, 00:14:13.823 "percent": 32 00:14:13.823 } 00:14:13.823 }, 00:14:13.823 "base_bdevs_list": [ 00:14:13.823 { 00:14:13.823 "name": "spare", 00:14:13.823 "uuid": "3977e6b8-2421-556d-94bd-bf3c690b85ca", 00:14:13.823 "is_configured": true, 00:14:13.823 "data_offset": 2048, 00:14:13.823 "data_size": 63488 00:14:13.823 }, 00:14:13.823 { 00:14:13.823 "name": null, 00:14:13.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.823 "is_configured": false, 00:14:13.823 "data_offset": 2048, 00:14:13.823 "data_size": 63488 00:14:13.823 }, 00:14:13.823 { 00:14:13.823 "name": "BaseBdev3", 00:14:13.823 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:13.823 "is_configured": true, 00:14:13.823 "data_offset": 2048, 00:14:13.823 "data_size": 63488 00:14:13.823 }, 00:14:13.823 { 00:14:13.823 "name": "BaseBdev4", 00:14:13.823 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:13.823 "is_configured": true, 00:14:13.823 "data_offset": 2048, 00:14:13.823 "data_size": 63488 00:14:13.823 } 00:14:13.823 ] 00:14:13.823 }' 00:14:13.823 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.082 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.082 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.082 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.082 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete 
spare 00:14:14.082 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.082 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.082 [2024-10-13 02:28:32.552702] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.083 [2024-10-13 02:28:32.607673] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.083 [2024-10-13 02:28:32.607744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.083 [2024-10-13 02:28:32.607757] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.083 [2024-10-13 02:28:32.607766] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.083 
02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.083 "name": "raid_bdev1", 00:14:14.083 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:14.083 "strip_size_kb": 0, 00:14:14.083 "state": "online", 00:14:14.083 "raid_level": "raid1", 00:14:14.083 "superblock": true, 00:14:14.083 "num_base_bdevs": 4, 00:14:14.083 "num_base_bdevs_discovered": 2, 00:14:14.083 "num_base_bdevs_operational": 2, 00:14:14.083 "base_bdevs_list": [ 00:14:14.083 { 00:14:14.083 "name": null, 00:14:14.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.083 "is_configured": false, 00:14:14.083 "data_offset": 0, 00:14:14.083 "data_size": 63488 00:14:14.083 }, 00:14:14.083 { 00:14:14.083 "name": null, 00:14:14.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.083 "is_configured": false, 00:14:14.083 "data_offset": 2048, 00:14:14.083 "data_size": 63488 00:14:14.083 }, 00:14:14.083 { 00:14:14.083 "name": "BaseBdev3", 00:14:14.083 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:14.083 "is_configured": true, 00:14:14.083 "data_offset": 2048, 00:14:14.083 "data_size": 63488 00:14:14.083 }, 00:14:14.083 { 00:14:14.083 "name": "BaseBdev4", 00:14:14.083 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:14.083 "is_configured": true, 00:14:14.083 "data_offset": 2048, 
00:14:14.083 "data_size": 63488 00:14:14.083 } 00:14:14.083 ] 00:14:14.083 }' 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.083 02:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.652 "name": "raid_bdev1", 00:14:14.652 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:14.652 "strip_size_kb": 0, 00:14:14.652 "state": "online", 00:14:14.652 "raid_level": "raid1", 00:14:14.652 "superblock": true, 00:14:14.652 "num_base_bdevs": 4, 00:14:14.652 "num_base_bdevs_discovered": 2, 00:14:14.652 "num_base_bdevs_operational": 2, 00:14:14.652 "base_bdevs_list": [ 00:14:14.652 { 00:14:14.652 "name": null, 00:14:14.652 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:14.652 "is_configured": false, 00:14:14.652 "data_offset": 0, 00:14:14.652 "data_size": 63488 00:14:14.652 }, 00:14:14.652 { 00:14:14.652 "name": null, 00:14:14.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.652 "is_configured": false, 00:14:14.652 "data_offset": 2048, 00:14:14.652 "data_size": 63488 00:14:14.652 }, 00:14:14.652 { 00:14:14.652 "name": "BaseBdev3", 00:14:14.652 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:14.652 "is_configured": true, 00:14:14.652 "data_offset": 2048, 00:14:14.652 "data_size": 63488 00:14:14.652 }, 00:14:14.652 { 00:14:14.652 "name": "BaseBdev4", 00:14:14.652 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:14.652 "is_configured": true, 00:14:14.652 "data_offset": 2048, 00:14:14.652 "data_size": 63488 00:14:14.652 } 00:14:14.652 ] 00:14:14.652 }' 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.652 [2024-10-13 02:28:33.218739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:14.652 [2024-10-13 02:28:33.218797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.652 [2024-10-13 02:28:33.218817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:14.652 [2024-10-13 02:28:33.218827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.652 [2024-10-13 02:28:33.219236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.652 [2024-10-13 02:28:33.219256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:14.652 [2024-10-13 02:28:33.219324] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:14.652 [2024-10-13 02:28:33.219339] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:14.652 [2024-10-13 02:28:33.219349] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:14.652 [2024-10-13 02:28:33.219360] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:14.652 BaseBdev1 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.652 02:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.591 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.851 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.851 "name": "raid_bdev1", 00:14:15.851 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:15.851 "strip_size_kb": 0, 00:14:15.851 "state": "online", 00:14:15.851 "raid_level": "raid1", 00:14:15.851 "superblock": true, 00:14:15.851 "num_base_bdevs": 4, 00:14:15.851 "num_base_bdevs_discovered": 2, 00:14:15.851 "num_base_bdevs_operational": 2, 00:14:15.851 "base_bdevs_list": [ 00:14:15.851 { 00:14:15.851 "name": null, 00:14:15.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.851 
"is_configured": false, 00:14:15.851 "data_offset": 0, 00:14:15.851 "data_size": 63488 00:14:15.851 }, 00:14:15.851 { 00:14:15.851 "name": null, 00:14:15.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.851 "is_configured": false, 00:14:15.851 "data_offset": 2048, 00:14:15.851 "data_size": 63488 00:14:15.851 }, 00:14:15.851 { 00:14:15.851 "name": "BaseBdev3", 00:14:15.851 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:15.851 "is_configured": true, 00:14:15.851 "data_offset": 2048, 00:14:15.851 "data_size": 63488 00:14:15.851 }, 00:14:15.851 { 00:14:15.851 "name": "BaseBdev4", 00:14:15.851 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:15.851 "is_configured": true, 00:14:15.851 "data_offset": 2048, 00:14:15.851 "data_size": 63488 00:14:15.851 } 00:14:15.851 ] 00:14:15.851 }' 00:14:15.851 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.851 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.110 "name": "raid_bdev1", 00:14:16.110 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:16.110 "strip_size_kb": 0, 00:14:16.110 "state": "online", 00:14:16.110 "raid_level": "raid1", 00:14:16.110 "superblock": true, 00:14:16.110 "num_base_bdevs": 4, 00:14:16.110 "num_base_bdevs_discovered": 2, 00:14:16.110 "num_base_bdevs_operational": 2, 00:14:16.110 "base_bdevs_list": [ 00:14:16.110 { 00:14:16.110 "name": null, 00:14:16.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.110 "is_configured": false, 00:14:16.110 "data_offset": 0, 00:14:16.110 "data_size": 63488 00:14:16.110 }, 00:14:16.110 { 00:14:16.110 "name": null, 00:14:16.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.110 "is_configured": false, 00:14:16.110 "data_offset": 2048, 00:14:16.110 "data_size": 63488 00:14:16.110 }, 00:14:16.110 { 00:14:16.110 "name": "BaseBdev3", 00:14:16.110 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:16.110 "is_configured": true, 00:14:16.110 "data_offset": 2048, 00:14:16.110 "data_size": 63488 00:14:16.110 }, 00:14:16.110 { 00:14:16.110 "name": "BaseBdev4", 00:14:16.110 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:16.110 "is_configured": true, 00:14:16.110 "data_offset": 2048, 00:14:16.110 "data_size": 63488 00:14:16.110 } 00:14:16.110 ] 00:14:16.110 }' 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.110 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- 
# [[ none == \n\o\n\e ]] 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.370 [2024-10-13 02:28:34.852102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.370 [2024-10-13 02:28:34.852290] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:16.370 [2024-10-13 02:28:34.852313] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:16.370 request: 00:14:16.370 { 00:14:16.370 "base_bdev": "BaseBdev1", 00:14:16.370 "raid_bdev": "raid_bdev1", 00:14:16.370 "method": "bdev_raid_add_base_bdev", 00:14:16.370 "req_id": 1 00:14:16.370 } 00:14:16.370 Got JSON-RPC error response 00:14:16.370 response: 00:14:16.370 { 
00:14:16.370 "code": -22, 00:14:16.370 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:16.370 } 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.370 02:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.309 "name": "raid_bdev1", 00:14:17.309 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:17.309 "strip_size_kb": 0, 00:14:17.309 "state": "online", 00:14:17.309 "raid_level": "raid1", 00:14:17.309 "superblock": true, 00:14:17.309 "num_base_bdevs": 4, 00:14:17.309 "num_base_bdevs_discovered": 2, 00:14:17.309 "num_base_bdevs_operational": 2, 00:14:17.309 "base_bdevs_list": [ 00:14:17.309 { 00:14:17.309 "name": null, 00:14:17.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.309 "is_configured": false, 00:14:17.309 "data_offset": 0, 00:14:17.309 "data_size": 63488 00:14:17.309 }, 00:14:17.309 { 00:14:17.309 "name": null, 00:14:17.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.309 "is_configured": false, 00:14:17.309 "data_offset": 2048, 00:14:17.309 "data_size": 63488 00:14:17.309 }, 00:14:17.309 { 00:14:17.309 "name": "BaseBdev3", 00:14:17.309 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:17.309 "is_configured": true, 00:14:17.309 "data_offset": 2048, 00:14:17.309 "data_size": 63488 00:14:17.309 }, 00:14:17.309 { 00:14:17.309 "name": "BaseBdev4", 00:14:17.309 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:17.309 "is_configured": true, 00:14:17.309 "data_offset": 2048, 00:14:17.309 "data_size": 63488 00:14:17.309 } 00:14:17.309 ] 00:14:17.309 }' 00:14:17.309 02:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.309 02:28:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.879 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.879 "name": "raid_bdev1", 00:14:17.879 "uuid": "73fe4ae7-c8fe-4a69-b3a4-5747ed830afd", 00:14:17.879 "strip_size_kb": 0, 00:14:17.879 "state": "online", 00:14:17.879 "raid_level": "raid1", 00:14:17.879 "superblock": true, 00:14:17.879 "num_base_bdevs": 4, 00:14:17.879 "num_base_bdevs_discovered": 2, 00:14:17.879 "num_base_bdevs_operational": 2, 00:14:17.879 "base_bdevs_list": [ 00:14:17.879 { 00:14:17.879 "name": null, 00:14:17.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.879 "is_configured": false, 00:14:17.879 "data_offset": 0, 00:14:17.879 "data_size": 63488 00:14:17.879 }, 00:14:17.879 { 00:14:17.879 "name": null, 00:14:17.879 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:17.879 "is_configured": false, 00:14:17.879 "data_offset": 2048, 00:14:17.879 "data_size": 63488 00:14:17.879 }, 00:14:17.879 { 00:14:17.879 "name": "BaseBdev3", 00:14:17.879 "uuid": "1955fb23-c47b-5f40-8fc2-3d0a210bc797", 00:14:17.879 "is_configured": true, 00:14:17.879 "data_offset": 2048, 00:14:17.879 "data_size": 63488 00:14:17.879 }, 00:14:17.879 { 00:14:17.879 "name": "BaseBdev4", 00:14:17.879 "uuid": "2be08ebf-471f-5e16-9050-13b94cb260a8", 00:14:17.879 "is_configured": true, 00:14:17.879 "data_offset": 2048, 00:14:17.879 "data_size": 63488 00:14:17.879 } 00:14:17.879 ] 00:14:17.879 }' 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89681 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89681 ']' 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89681 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89681 00:14:17.880 killing process with pid 89681 00:14:17.880 Received shutdown signal, test time was about 17.811306 seconds 00:14:17.880 00:14:17.880 Latency(us) 00:14:17.880 [2024-10-13T02:28:36.564Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:14:17.880 [2024-10-13T02:28:36.564Z] =================================================================================================================== 00:14:17.880 [2024-10-13T02:28:36.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89681' 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89681 00:14:17.880 [2024-10-13 02:28:36.486938] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.880 [2024-10-13 02:28:36.487055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.880 [2024-10-13 02:28:36.487127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.880 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89681 00:14:17.880 [2024-10-13 02:28:36.487136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:17.880 [2024-10-13 02:28:36.533066] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.139 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:18.139 00:14:18.139 real 0m19.817s 00:14:18.139 user 0m26.326s 00:14:18.139 sys 0m2.602s 00:14:18.139 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.139 02:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.139 ************************************ 00:14:18.139 END TEST raid_rebuild_test_sb_io 00:14:18.139 
************************************ 00:14:18.400 02:28:36 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:18.400 02:28:36 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:18.400 02:28:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:18.400 02:28:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.400 02:28:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.400 ************************************ 00:14:18.400 START TEST raid5f_state_function_test 00:14:18.400 ************************************ 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.400 02:28:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90387 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:18.400 Process raid pid: 90387 00:14:18.400 
02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90387' 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90387 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90387 ']' 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.400 02:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.400 [2024-10-13 02:28:36.935663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:18.400 [2024-10-13 02:28:36.935783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.660 [2024-10-13 02:28:37.083725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.660 [2024-10-13 02:28:37.129833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.660 [2024-10-13 02:28:37.172263] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.660 [2024-10-13 02:28:37.172305] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.230 [2024-10-13 02:28:37.765634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.230 [2024-10-13 02:28:37.765690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.230 [2024-10-13 02:28:37.765702] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.230 [2024-10-13 02:28:37.765711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.230 [2024-10-13 02:28:37.765717] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:19.230 [2024-10-13 02:28:37.765728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.230 "name": "Existed_Raid", 00:14:19.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.230 "strip_size_kb": 64, 00:14:19.230 "state": "configuring", 00:14:19.230 "raid_level": "raid5f", 00:14:19.230 "superblock": false, 00:14:19.230 "num_base_bdevs": 3, 00:14:19.230 "num_base_bdevs_discovered": 0, 00:14:19.230 "num_base_bdevs_operational": 3, 00:14:19.230 "base_bdevs_list": [ 00:14:19.230 { 00:14:19.230 "name": "BaseBdev1", 00:14:19.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.230 "is_configured": false, 00:14:19.230 "data_offset": 0, 00:14:19.230 "data_size": 0 00:14:19.230 }, 00:14:19.230 { 00:14:19.230 "name": "BaseBdev2", 00:14:19.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.230 "is_configured": false, 00:14:19.230 "data_offset": 0, 00:14:19.230 "data_size": 0 00:14:19.230 }, 00:14:19.230 { 00:14:19.230 "name": "BaseBdev3", 00:14:19.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.230 "is_configured": false, 00:14:19.230 "data_offset": 0, 00:14:19.230 "data_size": 0 00:14:19.230 } 00:14:19.230 ] 00:14:19.230 }' 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.230 02:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.799 [2024-10-13 02:28:38.216801] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.799 [2024-10-13 02:28:38.216851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.799 [2024-10-13 02:28:38.228780] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.799 [2024-10-13 02:28:38.228821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.799 [2024-10-13 02:28:38.228830] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.799 [2024-10-13 02:28:38.228839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.799 [2024-10-13 02:28:38.228845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.799 [2024-10-13 02:28:38.228853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.799 [2024-10-13 02:28:38.249779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.799 BaseBdev1 00:14:19.799 02:28:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.799 [ 00:14:19.799 { 00:14:19.799 "name": "BaseBdev1", 00:14:19.799 "aliases": [ 00:14:19.799 "597a8f6c-811b-458a-abb9-e485182a1b8e" 00:14:19.799 ], 00:14:19.799 "product_name": "Malloc disk", 00:14:19.799 "block_size": 512, 00:14:19.799 "num_blocks": 65536, 00:14:19.799 "uuid": "597a8f6c-811b-458a-abb9-e485182a1b8e", 00:14:19.799 "assigned_rate_limits": { 00:14:19.799 "rw_ios_per_sec": 0, 00:14:19.799 
"rw_mbytes_per_sec": 0, 00:14:19.799 "r_mbytes_per_sec": 0, 00:14:19.799 "w_mbytes_per_sec": 0 00:14:19.799 }, 00:14:19.799 "claimed": true, 00:14:19.799 "claim_type": "exclusive_write", 00:14:19.799 "zoned": false, 00:14:19.799 "supported_io_types": { 00:14:19.799 "read": true, 00:14:19.799 "write": true, 00:14:19.799 "unmap": true, 00:14:19.799 "flush": true, 00:14:19.799 "reset": true, 00:14:19.799 "nvme_admin": false, 00:14:19.799 "nvme_io": false, 00:14:19.799 "nvme_io_md": false, 00:14:19.799 "write_zeroes": true, 00:14:19.799 "zcopy": true, 00:14:19.799 "get_zone_info": false, 00:14:19.799 "zone_management": false, 00:14:19.799 "zone_append": false, 00:14:19.799 "compare": false, 00:14:19.799 "compare_and_write": false, 00:14:19.799 "abort": true, 00:14:19.799 "seek_hole": false, 00:14:19.799 "seek_data": false, 00:14:19.799 "copy": true, 00:14:19.799 "nvme_iov_md": false 00:14:19.799 }, 00:14:19.799 "memory_domains": [ 00:14:19.799 { 00:14:19.799 "dma_device_id": "system", 00:14:19.799 "dma_device_type": 1 00:14:19.799 }, 00:14:19.799 { 00:14:19.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.799 "dma_device_type": 2 00:14:19.799 } 00:14:19.799 ], 00:14:19.799 "driver_specific": {} 00:14:19.799 } 00:14:19.799 ] 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.799 02:28:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.799 "name": "Existed_Raid", 00:14:19.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.799 "strip_size_kb": 64, 00:14:19.799 "state": "configuring", 00:14:19.799 "raid_level": "raid5f", 00:14:19.799 "superblock": false, 00:14:19.799 "num_base_bdevs": 3, 00:14:19.799 "num_base_bdevs_discovered": 1, 00:14:19.799 "num_base_bdevs_operational": 3, 00:14:19.799 "base_bdevs_list": [ 00:14:19.799 { 00:14:19.799 "name": "BaseBdev1", 00:14:19.799 "uuid": "597a8f6c-811b-458a-abb9-e485182a1b8e", 00:14:19.799 "is_configured": true, 00:14:19.799 "data_offset": 0, 00:14:19.799 "data_size": 65536 00:14:19.799 }, 00:14:19.799 { 00:14:19.799 "name": 
"BaseBdev2", 00:14:19.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.799 "is_configured": false, 00:14:19.799 "data_offset": 0, 00:14:19.799 "data_size": 0 00:14:19.799 }, 00:14:19.799 { 00:14:19.799 "name": "BaseBdev3", 00:14:19.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.799 "is_configured": false, 00:14:19.799 "data_offset": 0, 00:14:19.799 "data_size": 0 00:14:19.799 } 00:14:19.799 ] 00:14:19.799 }' 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.799 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.059 [2024-10-13 02:28:38.717018] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.059 [2024-10-13 02:28:38.717076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.059 [2024-10-13 02:28:38.729040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.059 [2024-10-13 02:28:38.730904] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:20.059 [2024-10-13 02:28:38.730944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.059 [2024-10-13 02:28:38.730953] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.059 [2024-10-13 02:28:38.730980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.059 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.319 02:28:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.319 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.319 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.319 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.319 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.319 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.319 "name": "Existed_Raid", 00:14:20.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.319 "strip_size_kb": 64, 00:14:20.319 "state": "configuring", 00:14:20.319 "raid_level": "raid5f", 00:14:20.319 "superblock": false, 00:14:20.319 "num_base_bdevs": 3, 00:14:20.319 "num_base_bdevs_discovered": 1, 00:14:20.319 "num_base_bdevs_operational": 3, 00:14:20.319 "base_bdevs_list": [ 00:14:20.319 { 00:14:20.319 "name": "BaseBdev1", 00:14:20.319 "uuid": "597a8f6c-811b-458a-abb9-e485182a1b8e", 00:14:20.319 "is_configured": true, 00:14:20.319 "data_offset": 0, 00:14:20.319 "data_size": 65536 00:14:20.319 }, 00:14:20.319 { 00:14:20.319 "name": "BaseBdev2", 00:14:20.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.319 "is_configured": false, 00:14:20.319 "data_offset": 0, 00:14:20.319 "data_size": 0 00:14:20.320 }, 00:14:20.320 { 00:14:20.320 "name": "BaseBdev3", 00:14:20.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.320 "is_configured": false, 00:14:20.320 "data_offset": 0, 00:14:20.320 "data_size": 0 00:14:20.320 } 00:14:20.320 ] 00:14:20.320 }' 00:14:20.320 02:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.320 02:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.580 [2024-10-13 02:28:39.223359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.580 BaseBdev2 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.580 [ 00:14:20.580 { 00:14:20.580 "name": "BaseBdev2", 00:14:20.580 "aliases": [ 00:14:20.580 "d6e6fc4a-34c5-4eab-9de7-1717dfee4e3b" 00:14:20.580 ], 00:14:20.580 "product_name": "Malloc disk", 00:14:20.580 "block_size": 512, 00:14:20.580 "num_blocks": 65536, 00:14:20.580 "uuid": "d6e6fc4a-34c5-4eab-9de7-1717dfee4e3b", 00:14:20.580 "assigned_rate_limits": { 00:14:20.580 "rw_ios_per_sec": 0, 00:14:20.580 "rw_mbytes_per_sec": 0, 00:14:20.580 "r_mbytes_per_sec": 0, 00:14:20.580 "w_mbytes_per_sec": 0 00:14:20.580 }, 00:14:20.580 "claimed": true, 00:14:20.580 "claim_type": "exclusive_write", 00:14:20.580 "zoned": false, 00:14:20.580 "supported_io_types": { 00:14:20.580 "read": true, 00:14:20.580 "write": true, 00:14:20.580 "unmap": true, 00:14:20.580 "flush": true, 00:14:20.580 "reset": true, 00:14:20.580 "nvme_admin": false, 00:14:20.580 "nvme_io": false, 00:14:20.580 "nvme_io_md": false, 00:14:20.580 "write_zeroes": true, 00:14:20.580 "zcopy": true, 00:14:20.580 "get_zone_info": false, 00:14:20.580 "zone_management": false, 00:14:20.580 "zone_append": false, 00:14:20.580 "compare": false, 00:14:20.580 "compare_and_write": false, 00:14:20.580 "abort": true, 00:14:20.580 "seek_hole": false, 00:14:20.580 "seek_data": false, 00:14:20.580 "copy": true, 00:14:20.580 "nvme_iov_md": false 00:14:20.580 }, 00:14:20.580 "memory_domains": [ 00:14:20.580 { 00:14:20.580 "dma_device_id": "system", 00:14:20.580 "dma_device_type": 1 00:14:20.580 }, 00:14:20.580 { 00:14:20.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.580 "dma_device_type": 2 00:14:20.580 } 00:14:20.580 ], 00:14:20.580 "driver_specific": {} 00:14:20.580 } 00:14:20.580 ] 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.580 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.840 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.840 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.840 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.840 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.840 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.840 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:20.840 "name": "Existed_Raid", 00:14:20.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.840 "strip_size_kb": 64, 00:14:20.840 "state": "configuring", 00:14:20.840 "raid_level": "raid5f", 00:14:20.840 "superblock": false, 00:14:20.840 "num_base_bdevs": 3, 00:14:20.840 "num_base_bdevs_discovered": 2, 00:14:20.840 "num_base_bdevs_operational": 3, 00:14:20.840 "base_bdevs_list": [ 00:14:20.840 { 00:14:20.840 "name": "BaseBdev1", 00:14:20.840 "uuid": "597a8f6c-811b-458a-abb9-e485182a1b8e", 00:14:20.840 "is_configured": true, 00:14:20.840 "data_offset": 0, 00:14:20.840 "data_size": 65536 00:14:20.840 }, 00:14:20.840 { 00:14:20.840 "name": "BaseBdev2", 00:14:20.840 "uuid": "d6e6fc4a-34c5-4eab-9de7-1717dfee4e3b", 00:14:20.840 "is_configured": true, 00:14:20.840 "data_offset": 0, 00:14:20.840 "data_size": 65536 00:14:20.840 }, 00:14:20.840 { 00:14:20.840 "name": "BaseBdev3", 00:14:20.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.840 "is_configured": false, 00:14:20.840 "data_offset": 0, 00:14:20.840 "data_size": 0 00:14:20.840 } 00:14:20.840 ] 00:14:20.840 }' 00:14:20.840 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.840 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.100 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.101 [2024-10-13 02:28:39.681775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.101 [2024-10-13 02:28:39.681859] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:21.101 [2024-10-13 02:28:39.681883] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:21.101 [2024-10-13 02:28:39.682156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:21.101 [2024-10-13 02:28:39.682607] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:21.101 [2024-10-13 02:28:39.682627] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:21.101 [2024-10-13 02:28:39.682859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.101 BaseBdev3 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.101 [ 00:14:21.101 { 00:14:21.101 "name": "BaseBdev3", 00:14:21.101 "aliases": [ 00:14:21.101 "a41e24c7-b767-4bc1-b438-b5b7c2e57fa8" 00:14:21.101 ], 00:14:21.101 "product_name": "Malloc disk", 00:14:21.101 "block_size": 512, 00:14:21.101 "num_blocks": 65536, 00:14:21.101 "uuid": "a41e24c7-b767-4bc1-b438-b5b7c2e57fa8", 00:14:21.101 "assigned_rate_limits": { 00:14:21.101 "rw_ios_per_sec": 0, 00:14:21.101 "rw_mbytes_per_sec": 0, 00:14:21.101 "r_mbytes_per_sec": 0, 00:14:21.101 "w_mbytes_per_sec": 0 00:14:21.101 }, 00:14:21.101 "claimed": true, 00:14:21.101 "claim_type": "exclusive_write", 00:14:21.101 "zoned": false, 00:14:21.101 "supported_io_types": { 00:14:21.101 "read": true, 00:14:21.101 "write": true, 00:14:21.101 "unmap": true, 00:14:21.101 "flush": true, 00:14:21.101 "reset": true, 00:14:21.101 "nvme_admin": false, 00:14:21.101 "nvme_io": false, 00:14:21.101 "nvme_io_md": false, 00:14:21.101 "write_zeroes": true, 00:14:21.101 "zcopy": true, 00:14:21.101 "get_zone_info": false, 00:14:21.101 "zone_management": false, 00:14:21.101 "zone_append": false, 00:14:21.101 "compare": false, 00:14:21.101 "compare_and_write": false, 00:14:21.101 "abort": true, 00:14:21.101 "seek_hole": false, 00:14:21.101 "seek_data": false, 00:14:21.101 "copy": true, 00:14:21.101 "nvme_iov_md": false 00:14:21.101 }, 00:14:21.101 "memory_domains": [ 00:14:21.101 { 00:14:21.101 "dma_device_id": "system", 00:14:21.101 "dma_device_type": 1 00:14:21.101 }, 00:14:21.101 { 00:14:21.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.101 "dma_device_type": 2 00:14:21.101 } 00:14:21.101 ], 00:14:21.101 "driver_specific": {} 00:14:21.101 } 00:14:21.101 ] 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.101 02:28:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.101 "name": "Existed_Raid", 00:14:21.101 "uuid": "e6e748c8-892d-4d48-90ee-d22e4feec775", 00:14:21.101 "strip_size_kb": 64, 00:14:21.101 "state": "online", 00:14:21.101 "raid_level": "raid5f", 00:14:21.101 "superblock": false, 00:14:21.101 "num_base_bdevs": 3, 00:14:21.101 "num_base_bdevs_discovered": 3, 00:14:21.101 "num_base_bdevs_operational": 3, 00:14:21.101 "base_bdevs_list": [ 00:14:21.101 { 00:14:21.101 "name": "BaseBdev1", 00:14:21.101 "uuid": "597a8f6c-811b-458a-abb9-e485182a1b8e", 00:14:21.101 "is_configured": true, 00:14:21.101 "data_offset": 0, 00:14:21.101 "data_size": 65536 00:14:21.101 }, 00:14:21.101 { 00:14:21.101 "name": "BaseBdev2", 00:14:21.101 "uuid": "d6e6fc4a-34c5-4eab-9de7-1717dfee4e3b", 00:14:21.101 "is_configured": true, 00:14:21.101 "data_offset": 0, 00:14:21.101 "data_size": 65536 00:14:21.101 }, 00:14:21.101 { 00:14:21.101 "name": "BaseBdev3", 00:14:21.101 "uuid": "a41e24c7-b767-4bc1-b438-b5b7c2e57fa8", 00:14:21.101 "is_configured": true, 00:14:21.101 "data_offset": 0, 00:14:21.101 "data_size": 65536 00:14:21.101 } 00:14:21.101 ] 00:14:21.101 }' 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.101 02:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.672 02:28:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.672 [2024-10-13 02:28:40.141290] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.672 "name": "Existed_Raid", 00:14:21.672 "aliases": [ 00:14:21.672 "e6e748c8-892d-4d48-90ee-d22e4feec775" 00:14:21.672 ], 00:14:21.672 "product_name": "Raid Volume", 00:14:21.672 "block_size": 512, 00:14:21.672 "num_blocks": 131072, 00:14:21.672 "uuid": "e6e748c8-892d-4d48-90ee-d22e4feec775", 00:14:21.672 "assigned_rate_limits": { 00:14:21.672 "rw_ios_per_sec": 0, 00:14:21.672 "rw_mbytes_per_sec": 0, 00:14:21.672 "r_mbytes_per_sec": 0, 00:14:21.672 "w_mbytes_per_sec": 0 00:14:21.672 }, 00:14:21.672 "claimed": false, 00:14:21.672 "zoned": false, 00:14:21.672 "supported_io_types": { 00:14:21.672 "read": true, 00:14:21.672 "write": true, 00:14:21.672 "unmap": false, 00:14:21.672 "flush": false, 00:14:21.672 "reset": true, 00:14:21.672 "nvme_admin": false, 00:14:21.672 "nvme_io": false, 00:14:21.672 "nvme_io_md": false, 00:14:21.672 "write_zeroes": true, 00:14:21.672 "zcopy": false, 00:14:21.672 "get_zone_info": false, 00:14:21.672 "zone_management": false, 00:14:21.672 "zone_append": false, 
00:14:21.672 "compare": false, 00:14:21.672 "compare_and_write": false, 00:14:21.672 "abort": false, 00:14:21.672 "seek_hole": false, 00:14:21.672 "seek_data": false, 00:14:21.672 "copy": false, 00:14:21.672 "nvme_iov_md": false 00:14:21.672 }, 00:14:21.672 "driver_specific": { 00:14:21.672 "raid": { 00:14:21.672 "uuid": "e6e748c8-892d-4d48-90ee-d22e4feec775", 00:14:21.672 "strip_size_kb": 64, 00:14:21.672 "state": "online", 00:14:21.672 "raid_level": "raid5f", 00:14:21.672 "superblock": false, 00:14:21.672 "num_base_bdevs": 3, 00:14:21.672 "num_base_bdevs_discovered": 3, 00:14:21.672 "num_base_bdevs_operational": 3, 00:14:21.672 "base_bdevs_list": [ 00:14:21.672 { 00:14:21.672 "name": "BaseBdev1", 00:14:21.672 "uuid": "597a8f6c-811b-458a-abb9-e485182a1b8e", 00:14:21.672 "is_configured": true, 00:14:21.672 "data_offset": 0, 00:14:21.672 "data_size": 65536 00:14:21.672 }, 00:14:21.672 { 00:14:21.672 "name": "BaseBdev2", 00:14:21.672 "uuid": "d6e6fc4a-34c5-4eab-9de7-1717dfee4e3b", 00:14:21.672 "is_configured": true, 00:14:21.672 "data_offset": 0, 00:14:21.672 "data_size": 65536 00:14:21.672 }, 00:14:21.672 { 00:14:21.672 "name": "BaseBdev3", 00:14:21.672 "uuid": "a41e24c7-b767-4bc1-b438-b5b7c2e57fa8", 00:14:21.672 "is_configured": true, 00:14:21.672 "data_offset": 0, 00:14:21.672 "data_size": 65536 00:14:21.672 } 00:14:21.672 ] 00:14:21.672 } 00:14:21.672 } 00:14:21.672 }' 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:21.672 BaseBdev2 00:14:21.672 BaseBdev3' 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.672 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.932 [2024-10-13 02:28:40.416596] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:21.932 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:21.933 
02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.933 "name": "Existed_Raid", 00:14:21.933 "uuid": "e6e748c8-892d-4d48-90ee-d22e4feec775", 00:14:21.933 "strip_size_kb": 64, 00:14:21.933 "state": 
"online", 00:14:21.933 "raid_level": "raid5f", 00:14:21.933 "superblock": false, 00:14:21.933 "num_base_bdevs": 3, 00:14:21.933 "num_base_bdevs_discovered": 2, 00:14:21.933 "num_base_bdevs_operational": 2, 00:14:21.933 "base_bdevs_list": [ 00:14:21.933 { 00:14:21.933 "name": null, 00:14:21.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.933 "is_configured": false, 00:14:21.933 "data_offset": 0, 00:14:21.933 "data_size": 65536 00:14:21.933 }, 00:14:21.933 { 00:14:21.933 "name": "BaseBdev2", 00:14:21.933 "uuid": "d6e6fc4a-34c5-4eab-9de7-1717dfee4e3b", 00:14:21.933 "is_configured": true, 00:14:21.933 "data_offset": 0, 00:14:21.933 "data_size": 65536 00:14:21.933 }, 00:14:21.933 { 00:14:21.933 "name": "BaseBdev3", 00:14:21.933 "uuid": "a41e24c7-b767-4bc1-b438-b5b7c2e57fa8", 00:14:21.933 "is_configured": true, 00:14:21.933 "data_offset": 0, 00:14:21.933 "data_size": 65536 00:14:21.933 } 00:14:21.933 ] 00:14:21.933 }' 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.933 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.192 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:22.192 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.454 [2024-10-13 02:28:40.931259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.454 [2024-10-13 02:28:40.931361] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.454 [2024-10-13 02:28:40.942569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.454 02:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.454 [2024-10-13 02:28:41.002527] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:22.454 [2024-10-13 02:28:41.002571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.454 BaseBdev2 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:22.454 [ 00:14:22.454 { 00:14:22.454 "name": "BaseBdev2", 00:14:22.454 "aliases": [ 00:14:22.454 "e1caf8f7-435a-4766-bb07-989ae0ddfd01" 00:14:22.454 ], 00:14:22.454 "product_name": "Malloc disk", 00:14:22.454 "block_size": 512, 00:14:22.454 "num_blocks": 65536, 00:14:22.454 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:22.454 "assigned_rate_limits": { 00:14:22.454 "rw_ios_per_sec": 0, 00:14:22.454 "rw_mbytes_per_sec": 0, 00:14:22.454 "r_mbytes_per_sec": 0, 00:14:22.454 "w_mbytes_per_sec": 0 00:14:22.454 }, 00:14:22.454 "claimed": false, 00:14:22.454 "zoned": false, 00:14:22.454 "supported_io_types": { 00:14:22.454 "read": true, 00:14:22.454 "write": true, 00:14:22.454 "unmap": true, 00:14:22.454 "flush": true, 00:14:22.454 "reset": true, 00:14:22.454 "nvme_admin": false, 00:14:22.454 "nvme_io": false, 00:14:22.454 "nvme_io_md": false, 00:14:22.454 "write_zeroes": true, 00:14:22.454 "zcopy": true, 00:14:22.454 "get_zone_info": false, 00:14:22.454 "zone_management": false, 00:14:22.454 "zone_append": false, 00:14:22.454 "compare": false, 00:14:22.454 "compare_and_write": false, 00:14:22.454 "abort": true, 00:14:22.454 "seek_hole": false, 00:14:22.454 "seek_data": false, 00:14:22.454 "copy": true, 00:14:22.454 "nvme_iov_md": false 00:14:22.454 }, 00:14:22.454 "memory_domains": [ 00:14:22.454 { 00:14:22.454 "dma_device_id": "system", 00:14:22.454 "dma_device_type": 1 00:14:22.454 }, 00:14:22.454 { 00:14:22.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.454 "dma_device_type": 2 00:14:22.454 } 00:14:22.454 ], 00:14:22.454 "driver_specific": {} 00:14:22.454 } 00:14:22.454 ] 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.454 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.454 BaseBdev3 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.455 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.715 [ 00:14:22.715 { 00:14:22.715 "name": "BaseBdev3", 00:14:22.715 "aliases": [ 00:14:22.715 "fbbefe1a-4f00-4253-a06b-7444d3b61de9" 00:14:22.715 ], 00:14:22.715 "product_name": "Malloc disk", 00:14:22.715 "block_size": 512, 00:14:22.715 "num_blocks": 65536, 00:14:22.715 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:22.715 "assigned_rate_limits": { 00:14:22.715 "rw_ios_per_sec": 0, 00:14:22.715 "rw_mbytes_per_sec": 0, 00:14:22.715 "r_mbytes_per_sec": 0, 00:14:22.715 "w_mbytes_per_sec": 0 00:14:22.715 }, 00:14:22.715 "claimed": false, 00:14:22.715 "zoned": false, 00:14:22.715 "supported_io_types": { 00:14:22.715 "read": true, 00:14:22.715 "write": true, 00:14:22.715 "unmap": true, 00:14:22.715 "flush": true, 00:14:22.715 "reset": true, 00:14:22.715 "nvme_admin": false, 00:14:22.715 "nvme_io": false, 00:14:22.715 "nvme_io_md": false, 00:14:22.715 "write_zeroes": true, 00:14:22.715 "zcopy": true, 00:14:22.715 "get_zone_info": false, 00:14:22.715 "zone_management": false, 00:14:22.715 "zone_append": false, 00:14:22.715 "compare": false, 00:14:22.715 "compare_and_write": false, 00:14:22.715 "abort": true, 00:14:22.715 "seek_hole": false, 00:14:22.715 "seek_data": false, 00:14:22.715 "copy": true, 00:14:22.715 "nvme_iov_md": false 00:14:22.715 }, 00:14:22.715 "memory_domains": [ 00:14:22.715 { 00:14:22.715 "dma_device_id": "system", 00:14:22.715 "dma_device_type": 1 00:14:22.715 }, 00:14:22.715 { 00:14:22.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.715 "dma_device_type": 2 00:14:22.715 } 00:14:22.715 ], 00:14:22.715 "driver_specific": {} 00:14:22.715 } 00:14:22.715 ] 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.715 02:28:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.715 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.716 [2024-10-13 02:28:41.169680] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.716 [2024-10-13 02:28:41.169781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.716 [2024-10-13 02:28:41.169846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.716 [2024-10-13 02:28:41.171672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.716 02:28:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.716 "name": "Existed_Raid", 00:14:22.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.716 "strip_size_kb": 64, 00:14:22.716 "state": "configuring", 00:14:22.716 "raid_level": "raid5f", 00:14:22.716 "superblock": false, 00:14:22.716 "num_base_bdevs": 3, 00:14:22.716 "num_base_bdevs_discovered": 2, 00:14:22.716 "num_base_bdevs_operational": 3, 00:14:22.716 "base_bdevs_list": [ 00:14:22.716 { 00:14:22.716 "name": "BaseBdev1", 00:14:22.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.716 "is_configured": false, 00:14:22.716 "data_offset": 0, 00:14:22.716 "data_size": 0 00:14:22.716 }, 00:14:22.716 { 00:14:22.716 "name": "BaseBdev2", 00:14:22.716 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:22.716 "is_configured": true, 00:14:22.716 "data_offset": 0, 00:14:22.716 "data_size": 65536 00:14:22.716 }, 00:14:22.716 { 00:14:22.716 "name": "BaseBdev3", 00:14:22.716 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:22.716 "is_configured": true, 
00:14:22.716 "data_offset": 0, 00:14:22.716 "data_size": 65536 00:14:22.716 } 00:14:22.716 ] 00:14:22.716 }' 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.716 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.285 [2024-10-13 02:28:41.672829] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.285 02:28:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.285 "name": "Existed_Raid", 00:14:23.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.285 "strip_size_kb": 64, 00:14:23.285 "state": "configuring", 00:14:23.285 "raid_level": "raid5f", 00:14:23.285 "superblock": false, 00:14:23.285 "num_base_bdevs": 3, 00:14:23.285 "num_base_bdevs_discovered": 1, 00:14:23.285 "num_base_bdevs_operational": 3, 00:14:23.285 "base_bdevs_list": [ 00:14:23.285 { 00:14:23.285 "name": "BaseBdev1", 00:14:23.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.285 "is_configured": false, 00:14:23.285 "data_offset": 0, 00:14:23.285 "data_size": 0 00:14:23.285 }, 00:14:23.285 { 00:14:23.285 "name": null, 00:14:23.285 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:23.285 "is_configured": false, 00:14:23.285 "data_offset": 0, 00:14:23.285 "data_size": 65536 00:14:23.285 }, 00:14:23.285 { 00:14:23.285 "name": "BaseBdev3", 00:14:23.285 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:23.285 "is_configured": true, 00:14:23.285 "data_offset": 0, 00:14:23.285 "data_size": 65536 00:14:23.285 } 00:14:23.285 ] 00:14:23.285 }' 00:14:23.285 02:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.285 02:28:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.545 [2024-10-13 02:28:42.174986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.545 BaseBdev1 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:23.545 02:28:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.545 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.545 [ 00:14:23.545 { 00:14:23.545 "name": "BaseBdev1", 00:14:23.545 "aliases": [ 00:14:23.545 "513d1788-1c85-4089-8570-50d642e8411f" 00:14:23.545 ], 00:14:23.545 "product_name": "Malloc disk", 00:14:23.545 "block_size": 512, 00:14:23.545 "num_blocks": 65536, 00:14:23.545 "uuid": "513d1788-1c85-4089-8570-50d642e8411f", 00:14:23.545 "assigned_rate_limits": { 00:14:23.545 "rw_ios_per_sec": 0, 00:14:23.545 "rw_mbytes_per_sec": 0, 00:14:23.545 "r_mbytes_per_sec": 0, 00:14:23.545 "w_mbytes_per_sec": 0 00:14:23.545 }, 00:14:23.545 "claimed": true, 00:14:23.545 "claim_type": "exclusive_write", 00:14:23.545 "zoned": false, 00:14:23.545 "supported_io_types": { 00:14:23.545 "read": true, 00:14:23.545 "write": true, 00:14:23.545 "unmap": true, 00:14:23.545 "flush": true, 00:14:23.545 "reset": true, 00:14:23.545 "nvme_admin": false, 00:14:23.545 "nvme_io": false, 00:14:23.545 "nvme_io_md": false, 00:14:23.545 "write_zeroes": true, 00:14:23.545 "zcopy": true, 00:14:23.545 "get_zone_info": false, 00:14:23.545 "zone_management": false, 00:14:23.545 "zone_append": false, 00:14:23.546 
"compare": false, 00:14:23.546 "compare_and_write": false, 00:14:23.546 "abort": true, 00:14:23.546 "seek_hole": false, 00:14:23.546 "seek_data": false, 00:14:23.546 "copy": true, 00:14:23.546 "nvme_iov_md": false 00:14:23.546 }, 00:14:23.546 "memory_domains": [ 00:14:23.546 { 00:14:23.546 "dma_device_id": "system", 00:14:23.546 "dma_device_type": 1 00:14:23.546 }, 00:14:23.546 { 00:14:23.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.546 "dma_device_type": 2 00:14:23.546 } 00:14:23.546 ], 00:14:23.546 "driver_specific": {} 00:14:23.546 } 00:14:23.546 ] 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.546 02:28:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.546 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.805 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.805 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.805 "name": "Existed_Raid", 00:14:23.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.805 "strip_size_kb": 64, 00:14:23.805 "state": "configuring", 00:14:23.805 "raid_level": "raid5f", 00:14:23.805 "superblock": false, 00:14:23.805 "num_base_bdevs": 3, 00:14:23.805 "num_base_bdevs_discovered": 2, 00:14:23.805 "num_base_bdevs_operational": 3, 00:14:23.805 "base_bdevs_list": [ 00:14:23.805 { 00:14:23.806 "name": "BaseBdev1", 00:14:23.806 "uuid": "513d1788-1c85-4089-8570-50d642e8411f", 00:14:23.806 "is_configured": true, 00:14:23.806 "data_offset": 0, 00:14:23.806 "data_size": 65536 00:14:23.806 }, 00:14:23.806 { 00:14:23.806 "name": null, 00:14:23.806 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:23.806 "is_configured": false, 00:14:23.806 "data_offset": 0, 00:14:23.806 "data_size": 65536 00:14:23.806 }, 00:14:23.806 { 00:14:23.806 "name": "BaseBdev3", 00:14:23.806 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:23.806 "is_configured": true, 00:14:23.806 "data_offset": 0, 00:14:23.806 "data_size": 65536 00:14:23.806 } 00:14:23.806 ] 00:14:23.806 }' 00:14:23.806 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.806 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.066 02:28:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.066 [2024-10-13 02:28:42.670155] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.066 02:28:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.066 "name": "Existed_Raid", 00:14:24.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.066 "strip_size_kb": 64, 00:14:24.066 "state": "configuring", 00:14:24.066 "raid_level": "raid5f", 00:14:24.066 "superblock": false, 00:14:24.066 "num_base_bdevs": 3, 00:14:24.066 "num_base_bdevs_discovered": 1, 00:14:24.066 "num_base_bdevs_operational": 3, 00:14:24.066 "base_bdevs_list": [ 00:14:24.066 { 00:14:24.066 "name": "BaseBdev1", 00:14:24.066 "uuid": "513d1788-1c85-4089-8570-50d642e8411f", 00:14:24.066 "is_configured": true, 00:14:24.066 "data_offset": 0, 00:14:24.066 "data_size": 65536 00:14:24.066 }, 00:14:24.066 { 00:14:24.066 "name": null, 00:14:24.066 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:24.066 "is_configured": false, 00:14:24.066 "data_offset": 0, 00:14:24.066 "data_size": 65536 00:14:24.066 }, 00:14:24.066 { 00:14:24.066 "name": null, 
00:14:24.066 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:24.066 "is_configured": false, 00:14:24.066 "data_offset": 0, 00:14:24.066 "data_size": 65536 00:14:24.066 } 00:14:24.066 ] 00:14:24.066 }' 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.066 02:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.636 [2024-10-13 02:28:43.221236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.636 02:28:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.636 "name": "Existed_Raid", 00:14:24.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.636 "strip_size_kb": 64, 00:14:24.636 "state": "configuring", 00:14:24.636 "raid_level": "raid5f", 00:14:24.636 "superblock": false, 00:14:24.636 "num_base_bdevs": 3, 00:14:24.636 "num_base_bdevs_discovered": 2, 00:14:24.636 "num_base_bdevs_operational": 3, 00:14:24.636 "base_bdevs_list": [ 00:14:24.636 { 
00:14:24.636 "name": "BaseBdev1", 00:14:24.636 "uuid": "513d1788-1c85-4089-8570-50d642e8411f", 00:14:24.636 "is_configured": true, 00:14:24.636 "data_offset": 0, 00:14:24.636 "data_size": 65536 00:14:24.636 }, 00:14:24.636 { 00:14:24.636 "name": null, 00:14:24.636 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:24.636 "is_configured": false, 00:14:24.636 "data_offset": 0, 00:14:24.636 "data_size": 65536 00:14:24.636 }, 00:14:24.636 { 00:14:24.636 "name": "BaseBdev3", 00:14:24.636 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:24.636 "is_configured": true, 00:14:24.636 "data_offset": 0, 00:14:24.636 "data_size": 65536 00:14:24.636 } 00:14:24.636 ] 00:14:24.636 }' 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.636 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.269 [2024-10-13 02:28:43.748378] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.269 "name": "Existed_Raid", 00:14:25.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.269 "strip_size_kb": 64, 00:14:25.269 "state": "configuring", 00:14:25.269 "raid_level": "raid5f", 00:14:25.269 "superblock": false, 00:14:25.269 "num_base_bdevs": 3, 00:14:25.269 "num_base_bdevs_discovered": 1, 00:14:25.269 "num_base_bdevs_operational": 3, 00:14:25.269 "base_bdevs_list": [ 00:14:25.269 { 00:14:25.269 "name": null, 00:14:25.269 "uuid": "513d1788-1c85-4089-8570-50d642e8411f", 00:14:25.269 "is_configured": false, 00:14:25.269 "data_offset": 0, 00:14:25.269 "data_size": 65536 00:14:25.269 }, 00:14:25.269 { 00:14:25.269 "name": null, 00:14:25.269 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:25.269 "is_configured": false, 00:14:25.269 "data_offset": 0, 00:14:25.269 "data_size": 65536 00:14:25.269 }, 00:14:25.269 { 00:14:25.269 "name": "BaseBdev3", 00:14:25.269 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:25.269 "is_configured": true, 00:14:25.269 "data_offset": 0, 00:14:25.269 "data_size": 65536 00:14:25.269 } 00:14:25.269 ] 00:14:25.269 }' 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.269 02:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.529 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.529 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:25.529 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.529 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.789 [2024-10-13 02:28:44.254081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.789 02:28:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.789 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.789 "name": "Existed_Raid", 00:14:25.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.789 "strip_size_kb": 64, 00:14:25.789 "state": "configuring", 00:14:25.789 "raid_level": "raid5f", 00:14:25.789 "superblock": false, 00:14:25.789 "num_base_bdevs": 3, 00:14:25.789 "num_base_bdevs_discovered": 2, 00:14:25.789 "num_base_bdevs_operational": 3, 00:14:25.789 "base_bdevs_list": [ 00:14:25.789 { 00:14:25.789 "name": null, 00:14:25.789 "uuid": "513d1788-1c85-4089-8570-50d642e8411f", 00:14:25.789 "is_configured": false, 00:14:25.789 "data_offset": 0, 00:14:25.789 "data_size": 65536 00:14:25.789 }, 00:14:25.789 { 00:14:25.789 "name": "BaseBdev2", 00:14:25.789 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:25.789 "is_configured": true, 00:14:25.789 "data_offset": 0, 00:14:25.789 "data_size": 65536 00:14:25.789 }, 00:14:25.789 { 00:14:25.789 "name": "BaseBdev3", 00:14:25.789 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:25.789 "is_configured": true, 00:14:25.789 "data_offset": 0, 00:14:25.789 "data_size": 65536 00:14:25.789 } 00:14:25.789 ] 00:14:25.789 }' 00:14:25.790 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.790 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.049 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.050 02:28:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.050 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.050 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.050 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 513d1788-1c85-4089-8570-50d642e8411f 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.310 [2024-10-13 02:28:44.804301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:26.310 [2024-10-13 02:28:44.804348] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:26.310 [2024-10-13 02:28:44.804359] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:26.310 [2024-10-13 02:28:44.804594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:14:26.310 [2024-10-13 02:28:44.805011] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:26.310 [2024-10-13 02:28:44.805028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:26.310 [2024-10-13 02:28:44.805218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.310 NewBaseBdev 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.310 02:28:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.310 [ 00:14:26.310 { 00:14:26.310 "name": "NewBaseBdev", 00:14:26.310 "aliases": [ 00:14:26.310 "513d1788-1c85-4089-8570-50d642e8411f" 00:14:26.310 ], 00:14:26.310 "product_name": "Malloc disk", 00:14:26.310 "block_size": 512, 00:14:26.310 "num_blocks": 65536, 00:14:26.310 "uuid": "513d1788-1c85-4089-8570-50d642e8411f", 00:14:26.310 "assigned_rate_limits": { 00:14:26.310 "rw_ios_per_sec": 0, 00:14:26.310 "rw_mbytes_per_sec": 0, 00:14:26.310 "r_mbytes_per_sec": 0, 00:14:26.310 "w_mbytes_per_sec": 0 00:14:26.310 }, 00:14:26.310 "claimed": true, 00:14:26.310 "claim_type": "exclusive_write", 00:14:26.310 "zoned": false, 00:14:26.310 "supported_io_types": { 00:14:26.310 "read": true, 00:14:26.310 "write": true, 00:14:26.310 "unmap": true, 00:14:26.310 "flush": true, 00:14:26.310 "reset": true, 00:14:26.310 "nvme_admin": false, 00:14:26.310 "nvme_io": false, 00:14:26.310 "nvme_io_md": false, 00:14:26.310 "write_zeroes": true, 00:14:26.310 "zcopy": true, 00:14:26.310 "get_zone_info": false, 00:14:26.310 "zone_management": false, 00:14:26.310 "zone_append": false, 00:14:26.310 "compare": false, 00:14:26.310 "compare_and_write": false, 00:14:26.310 "abort": true, 00:14:26.310 "seek_hole": false, 00:14:26.310 "seek_data": false, 00:14:26.310 "copy": true, 00:14:26.310 "nvme_iov_md": false 00:14:26.310 }, 00:14:26.310 "memory_domains": [ 00:14:26.310 { 00:14:26.310 "dma_device_id": "system", 00:14:26.310 "dma_device_type": 1 00:14:26.310 }, 00:14:26.310 { 00:14:26.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.310 "dma_device_type": 2 00:14:26.310 } 00:14:26.310 ], 00:14:26.310 "driver_specific": {} 00:14:26.310 } 00:14:26.310 ] 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.310 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:26.311 02:28:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.311 "name": "Existed_Raid", 00:14:26.311 "uuid": "6300fe19-ec82-42af-beaa-2719d6f747bf", 00:14:26.311 "strip_size_kb": 64, 00:14:26.311 "state": "online", 
00:14:26.311 "raid_level": "raid5f", 00:14:26.311 "superblock": false, 00:14:26.311 "num_base_bdevs": 3, 00:14:26.311 "num_base_bdevs_discovered": 3, 00:14:26.311 "num_base_bdevs_operational": 3, 00:14:26.311 "base_bdevs_list": [ 00:14:26.311 { 00:14:26.311 "name": "NewBaseBdev", 00:14:26.311 "uuid": "513d1788-1c85-4089-8570-50d642e8411f", 00:14:26.311 "is_configured": true, 00:14:26.311 "data_offset": 0, 00:14:26.311 "data_size": 65536 00:14:26.311 }, 00:14:26.311 { 00:14:26.311 "name": "BaseBdev2", 00:14:26.311 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:26.311 "is_configured": true, 00:14:26.311 "data_offset": 0, 00:14:26.311 "data_size": 65536 00:14:26.311 }, 00:14:26.311 { 00:14:26.311 "name": "BaseBdev3", 00:14:26.311 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:26.311 "is_configured": true, 00:14:26.311 "data_offset": 0, 00:14:26.311 "data_size": 65536 00:14:26.311 } 00:14:26.311 ] 00:14:26.311 }' 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.311 02:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.881 [2024-10-13 02:28:45.295855] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.881 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.881 "name": "Existed_Raid", 00:14:26.881 "aliases": [ 00:14:26.881 "6300fe19-ec82-42af-beaa-2719d6f747bf" 00:14:26.882 ], 00:14:26.882 "product_name": "Raid Volume", 00:14:26.882 "block_size": 512, 00:14:26.882 "num_blocks": 131072, 00:14:26.882 "uuid": "6300fe19-ec82-42af-beaa-2719d6f747bf", 00:14:26.882 "assigned_rate_limits": { 00:14:26.882 "rw_ios_per_sec": 0, 00:14:26.882 "rw_mbytes_per_sec": 0, 00:14:26.882 "r_mbytes_per_sec": 0, 00:14:26.882 "w_mbytes_per_sec": 0 00:14:26.882 }, 00:14:26.882 "claimed": false, 00:14:26.882 "zoned": false, 00:14:26.882 "supported_io_types": { 00:14:26.882 "read": true, 00:14:26.882 "write": true, 00:14:26.882 "unmap": false, 00:14:26.882 "flush": false, 00:14:26.882 "reset": true, 00:14:26.882 "nvme_admin": false, 00:14:26.882 "nvme_io": false, 00:14:26.882 "nvme_io_md": false, 00:14:26.882 "write_zeroes": true, 00:14:26.882 "zcopy": false, 00:14:26.882 "get_zone_info": false, 00:14:26.882 "zone_management": false, 00:14:26.882 "zone_append": false, 00:14:26.882 "compare": false, 00:14:26.882 "compare_and_write": false, 00:14:26.882 "abort": false, 00:14:26.882 "seek_hole": false, 00:14:26.882 "seek_data": false, 00:14:26.882 "copy": false, 00:14:26.882 "nvme_iov_md": false 00:14:26.882 }, 00:14:26.882 "driver_specific": { 00:14:26.882 "raid": { 00:14:26.882 "uuid": "6300fe19-ec82-42af-beaa-2719d6f747bf", 
00:14:26.882 "strip_size_kb": 64, 00:14:26.882 "state": "online", 00:14:26.882 "raid_level": "raid5f", 00:14:26.882 "superblock": false, 00:14:26.882 "num_base_bdevs": 3, 00:14:26.882 "num_base_bdevs_discovered": 3, 00:14:26.882 "num_base_bdevs_operational": 3, 00:14:26.882 "base_bdevs_list": [ 00:14:26.882 { 00:14:26.882 "name": "NewBaseBdev", 00:14:26.882 "uuid": "513d1788-1c85-4089-8570-50d642e8411f", 00:14:26.882 "is_configured": true, 00:14:26.882 "data_offset": 0, 00:14:26.882 "data_size": 65536 00:14:26.882 }, 00:14:26.882 { 00:14:26.882 "name": "BaseBdev2", 00:14:26.882 "uuid": "e1caf8f7-435a-4766-bb07-989ae0ddfd01", 00:14:26.882 "is_configured": true, 00:14:26.882 "data_offset": 0, 00:14:26.882 "data_size": 65536 00:14:26.882 }, 00:14:26.882 { 00:14:26.882 "name": "BaseBdev3", 00:14:26.882 "uuid": "fbbefe1a-4f00-4253-a06b-7444d3b61de9", 00:14:26.882 "is_configured": true, 00:14:26.882 "data_offset": 0, 00:14:26.882 "data_size": 65536 00:14:26.882 } 00:14:26.882 ] 00:14:26.882 } 00:14:26.882 } 00:14:26.882 }' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:26.882 BaseBdev2 00:14:26.882 BaseBdev3' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:26.882 02:28:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.882 [2024-10-13 02:28:45.547112] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.882 [2024-10-13 02:28:45.547141] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.882 [2024-10-13 02:28:45.547211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.882 [2024-10-13 02:28:45.547445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.882 [2024-10-13 02:28:45.547457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90387 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90387 ']' 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 90387 
00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:26.882 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.142 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90387 00:14:27.142 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.142 killing process with pid 90387 00:14:27.142 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.142 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90387' 00:14:27.142 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90387 00:14:27.142 [2024-10-13 02:28:45.597183] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.142 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90387 00:14:27.142 [2024-10-13 02:28:45.628596] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:27.403 00:14:27.403 real 0m9.040s 00:14:27.403 user 0m15.414s 00:14:27.403 sys 0m1.975s 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.403 ************************************ 00:14:27.403 END TEST raid5f_state_function_test 00:14:27.403 ************************************ 00:14:27.403 02:28:45 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:27.403 02:28:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:27.403 
02:28:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.403 02:28:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.403 ************************************ 00:14:27.403 START TEST raid5f_state_function_test_sb 00:14:27.403 ************************************ 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:27.403 
02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=90992 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:27.403 Process raid pid: 90992 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90992' 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 90992 00:14:27.403 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 90992 ']' 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.403 02:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.403 [2024-10-13 02:28:46.053576] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:27.403 [2024-10-13 02:28:46.053713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.663 [2024-10-13 02:28:46.179929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.663 [2024-10-13 02:28:46.225336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.663 [2024-10-13 02:28:46.267963] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.663 [2024-10-13 02:28:46.267998] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.233 [2024-10-13 02:28:46.877539] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:28.233 [2024-10-13 02:28:46.877590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:28.233 [2024-10-13 02:28:46.877601] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:28.233 [2024-10-13 02:28:46.877611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:28.233 [2024-10-13 02:28:46.877617] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:28.233 [2024-10-13 02:28:46.877627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.233 02:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.494 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.494 "name": "Existed_Raid", 00:14:28.494 "uuid": "1c25aec0-1911-4322-880f-ad08d1a659ae", 00:14:28.494 "strip_size_kb": 64, 00:14:28.494 "state": "configuring", 00:14:28.494 "raid_level": "raid5f", 00:14:28.494 "superblock": true, 00:14:28.494 "num_base_bdevs": 3, 00:14:28.494 "num_base_bdevs_discovered": 0, 00:14:28.494 "num_base_bdevs_operational": 3, 00:14:28.494 "base_bdevs_list": [ 00:14:28.494 { 00:14:28.494 "name": "BaseBdev1", 00:14:28.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.494 "is_configured": false, 00:14:28.494 "data_offset": 0, 00:14:28.494 "data_size": 0 00:14:28.494 }, 00:14:28.494 { 00:14:28.494 "name": "BaseBdev2", 00:14:28.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.494 "is_configured": false, 00:14:28.494 "data_offset": 0, 00:14:28.494 
"data_size": 0 00:14:28.494 }, 00:14:28.494 { 00:14:28.494 "name": "BaseBdev3", 00:14:28.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.494 "is_configured": false, 00:14:28.494 "data_offset": 0, 00:14:28.494 "data_size": 0 00:14:28.494 } 00:14:28.494 ] 00:14:28.494 }' 00:14:28.494 02:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.494 02:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.755 [2024-10-13 02:28:47.352624] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.755 [2024-10-13 02:28:47.352720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.755 [2024-10-13 02:28:47.364628] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:28.755 [2024-10-13 02:28:47.364714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:28.755 [2024-10-13 02:28:47.364740] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:14:28.755 [2024-10-13 02:28:47.364780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:28.755 [2024-10-13 02:28:47.364799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:28.755 [2024-10-13 02:28:47.364820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.755 [2024-10-13 02:28:47.385508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.755 BaseBdev1 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd 
bdev_wait_for_examine 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.755 [ 00:14:28.755 { 00:14:28.755 "name": "BaseBdev1", 00:14:28.755 "aliases": [ 00:14:28.755 "7305c9b8-c4a4-4d4d-966e-160fa8421959" 00:14:28.755 ], 00:14:28.755 "product_name": "Malloc disk", 00:14:28.755 "block_size": 512, 00:14:28.755 "num_blocks": 65536, 00:14:28.755 "uuid": "7305c9b8-c4a4-4d4d-966e-160fa8421959", 00:14:28.755 "assigned_rate_limits": { 00:14:28.755 "rw_ios_per_sec": 0, 00:14:28.755 "rw_mbytes_per_sec": 0, 00:14:28.755 "r_mbytes_per_sec": 0, 00:14:28.755 "w_mbytes_per_sec": 0 00:14:28.755 }, 00:14:28.755 "claimed": true, 00:14:28.755 "claim_type": "exclusive_write", 00:14:28.755 "zoned": false, 00:14:28.755 "supported_io_types": { 00:14:28.755 "read": true, 00:14:28.755 "write": true, 00:14:28.755 "unmap": true, 00:14:28.755 "flush": true, 00:14:28.755 "reset": true, 00:14:28.755 "nvme_admin": false, 00:14:28.755 "nvme_io": false, 00:14:28.755 "nvme_io_md": false, 00:14:28.755 "write_zeroes": true, 00:14:28.755 "zcopy": true, 00:14:28.755 "get_zone_info": false, 00:14:28.755 "zone_management": false, 00:14:28.755 "zone_append": false, 00:14:28.755 "compare": false, 00:14:28.755 "compare_and_write": false, 00:14:28.755 "abort": true, 00:14:28.755 "seek_hole": false, 00:14:28.755 "seek_data": false, 00:14:28.755 "copy": true, 
00:14:28.755 "nvme_iov_md": false 00:14:28.755 }, 00:14:28.755 "memory_domains": [ 00:14:28.755 { 00:14:28.755 "dma_device_id": "system", 00:14:28.755 "dma_device_type": 1 00:14:28.755 }, 00:14:28.755 { 00:14:28.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.755 "dma_device_type": 2 00:14:28.755 } 00:14:28.755 ], 00:14:28.755 "driver_specific": {} 00:14:28.755 } 00:14:28.755 ] 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.755 02:28:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.755 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.015 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.015 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.015 "name": "Existed_Raid", 00:14:29.015 "uuid": "77ba1e81-ab5b-44e2-98be-cad8c2fd8db8", 00:14:29.015 "strip_size_kb": 64, 00:14:29.015 "state": "configuring", 00:14:29.015 "raid_level": "raid5f", 00:14:29.015 "superblock": true, 00:14:29.015 "num_base_bdevs": 3, 00:14:29.015 "num_base_bdevs_discovered": 1, 00:14:29.015 "num_base_bdevs_operational": 3, 00:14:29.015 "base_bdevs_list": [ 00:14:29.015 { 00:14:29.015 "name": "BaseBdev1", 00:14:29.015 "uuid": "7305c9b8-c4a4-4d4d-966e-160fa8421959", 00:14:29.015 "is_configured": true, 00:14:29.015 "data_offset": 2048, 00:14:29.015 "data_size": 63488 00:14:29.015 }, 00:14:29.015 { 00:14:29.015 "name": "BaseBdev2", 00:14:29.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.015 "is_configured": false, 00:14:29.015 "data_offset": 0, 00:14:29.015 "data_size": 0 00:14:29.015 }, 00:14:29.015 { 00:14:29.015 "name": "BaseBdev3", 00:14:29.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.015 "is_configured": false, 00:14:29.015 "data_offset": 0, 00:14:29.015 "data_size": 0 00:14:29.015 } 00:14:29.015 ] 00:14:29.015 }' 00:14:29.015 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.015 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.274 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:14:29.274 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.274 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.274 [2024-10-13 02:28:47.900638] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:29.274 [2024-10-13 02:28:47.900687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:29.274 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.275 [2024-10-13 02:28:47.912669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.275 [2024-10-13 02:28:47.914468] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:29.275 [2024-10-13 02:28:47.914507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:29.275 [2024-10-13 02:28:47.914516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:29.275 [2024-10-13 02:28:47.914525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.275 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.534 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.534 "name": "Existed_Raid", 00:14:29.534 "uuid": 
"18947aa6-18af-4736-b06f-107636c6535e", 00:14:29.534 "strip_size_kb": 64, 00:14:29.534 "state": "configuring", 00:14:29.534 "raid_level": "raid5f", 00:14:29.534 "superblock": true, 00:14:29.534 "num_base_bdevs": 3, 00:14:29.534 "num_base_bdevs_discovered": 1, 00:14:29.534 "num_base_bdevs_operational": 3, 00:14:29.534 "base_bdevs_list": [ 00:14:29.534 { 00:14:29.534 "name": "BaseBdev1", 00:14:29.534 "uuid": "7305c9b8-c4a4-4d4d-966e-160fa8421959", 00:14:29.534 "is_configured": true, 00:14:29.534 "data_offset": 2048, 00:14:29.534 "data_size": 63488 00:14:29.534 }, 00:14:29.534 { 00:14:29.534 "name": "BaseBdev2", 00:14:29.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.534 "is_configured": false, 00:14:29.534 "data_offset": 0, 00:14:29.534 "data_size": 0 00:14:29.534 }, 00:14:29.534 { 00:14:29.534 "name": "BaseBdev3", 00:14:29.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.534 "is_configured": false, 00:14:29.534 "data_offset": 0, 00:14:29.534 "data_size": 0 00:14:29.534 } 00:14:29.534 ] 00:14:29.534 }' 00:14:29.534 02:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.534 02:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.793 [2024-10-13 02:28:48.388743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.793 BaseBdev2 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 
00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.793 [ 00:14:29.793 { 00:14:29.793 "name": "BaseBdev2", 00:14:29.793 "aliases": [ 00:14:29.793 "e0eb5d21-881f-4975-8153-2755f670ce94" 00:14:29.793 ], 00:14:29.793 "product_name": "Malloc disk", 00:14:29.793 "block_size": 512, 00:14:29.793 "num_blocks": 65536, 00:14:29.793 "uuid": "e0eb5d21-881f-4975-8153-2755f670ce94", 00:14:29.793 "assigned_rate_limits": { 00:14:29.793 "rw_ios_per_sec": 0, 00:14:29.793 "rw_mbytes_per_sec": 0, 00:14:29.793 "r_mbytes_per_sec": 0, 00:14:29.793 "w_mbytes_per_sec": 0 00:14:29.793 }, 00:14:29.793 "claimed": true, 00:14:29.793 "claim_type": 
"exclusive_write", 00:14:29.793 "zoned": false, 00:14:29.793 "supported_io_types": { 00:14:29.793 "read": true, 00:14:29.793 "write": true, 00:14:29.793 "unmap": true, 00:14:29.793 "flush": true, 00:14:29.793 "reset": true, 00:14:29.793 "nvme_admin": false, 00:14:29.793 "nvme_io": false, 00:14:29.793 "nvme_io_md": false, 00:14:29.793 "write_zeroes": true, 00:14:29.793 "zcopy": true, 00:14:29.793 "get_zone_info": false, 00:14:29.793 "zone_management": false, 00:14:29.793 "zone_append": false, 00:14:29.793 "compare": false, 00:14:29.793 "compare_and_write": false, 00:14:29.793 "abort": true, 00:14:29.793 "seek_hole": false, 00:14:29.793 "seek_data": false, 00:14:29.793 "copy": true, 00:14:29.793 "nvme_iov_md": false 00:14:29.793 }, 00:14:29.793 "memory_domains": [ 00:14:29.793 { 00:14:29.793 "dma_device_id": "system", 00:14:29.793 "dma_device_type": 1 00:14:29.793 }, 00:14:29.793 { 00:14:29.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.793 "dma_device_type": 2 00:14:29.793 } 00:14:29.793 ], 00:14:29.793 "driver_specific": {} 00:14:29.793 } 00:14:29.793 ] 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.793 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.053 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.053 "name": "Existed_Raid", 00:14:30.053 "uuid": "18947aa6-18af-4736-b06f-107636c6535e", 00:14:30.053 "strip_size_kb": 64, 00:14:30.053 "state": "configuring", 00:14:30.053 "raid_level": "raid5f", 00:14:30.053 "superblock": true, 00:14:30.053 "num_base_bdevs": 3, 00:14:30.053 "num_base_bdevs_discovered": 2, 00:14:30.053 "num_base_bdevs_operational": 3, 00:14:30.053 "base_bdevs_list": [ 00:14:30.053 { 00:14:30.053 "name": "BaseBdev1", 00:14:30.053 "uuid": "7305c9b8-c4a4-4d4d-966e-160fa8421959", 00:14:30.053 "is_configured": true, 
00:14:30.053 "data_offset": 2048, 00:14:30.053 "data_size": 63488 00:14:30.053 }, 00:14:30.053 { 00:14:30.053 "name": "BaseBdev2", 00:14:30.053 "uuid": "e0eb5d21-881f-4975-8153-2755f670ce94", 00:14:30.053 "is_configured": true, 00:14:30.053 "data_offset": 2048, 00:14:30.053 "data_size": 63488 00:14:30.053 }, 00:14:30.053 { 00:14:30.053 "name": "BaseBdev3", 00:14:30.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.053 "is_configured": false, 00:14:30.053 "data_offset": 0, 00:14:30.053 "data_size": 0 00:14:30.053 } 00:14:30.053 ] 00:14:30.053 }' 00:14:30.053 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.053 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.313 [2024-10-13 02:28:48.854668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.313 [2024-10-13 02:28:48.854963] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:30.313 [2024-10-13 02:28:48.855021] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:30.313 [2024-10-13 02:28:48.855302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:30.313 BaseBdev3 00:14:30.313 [2024-10-13 02:28:48.855767] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:30.313 [2024-10-13 02:28:48.855828] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:30.313 [2024-10-13 02:28:48.856039] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.313 [ 00:14:30.313 { 00:14:30.313 "name": "BaseBdev3", 00:14:30.313 "aliases": [ 00:14:30.313 "aaef3714-1751-454a-8a7b-d4c3b042f413" 00:14:30.313 ], 00:14:30.313 "product_name": "Malloc disk", 00:14:30.313 "block_size": 512, 00:14:30.313 "num_blocks": 65536, 00:14:30.313 "uuid": 
"aaef3714-1751-454a-8a7b-d4c3b042f413", 00:14:30.313 "assigned_rate_limits": { 00:14:30.313 "rw_ios_per_sec": 0, 00:14:30.313 "rw_mbytes_per_sec": 0, 00:14:30.313 "r_mbytes_per_sec": 0, 00:14:30.313 "w_mbytes_per_sec": 0 00:14:30.313 }, 00:14:30.313 "claimed": true, 00:14:30.313 "claim_type": "exclusive_write", 00:14:30.313 "zoned": false, 00:14:30.313 "supported_io_types": { 00:14:30.313 "read": true, 00:14:30.313 "write": true, 00:14:30.313 "unmap": true, 00:14:30.313 "flush": true, 00:14:30.313 "reset": true, 00:14:30.313 "nvme_admin": false, 00:14:30.313 "nvme_io": false, 00:14:30.313 "nvme_io_md": false, 00:14:30.313 "write_zeroes": true, 00:14:30.313 "zcopy": true, 00:14:30.313 "get_zone_info": false, 00:14:30.313 "zone_management": false, 00:14:30.313 "zone_append": false, 00:14:30.313 "compare": false, 00:14:30.313 "compare_and_write": false, 00:14:30.313 "abort": true, 00:14:30.313 "seek_hole": false, 00:14:30.313 "seek_data": false, 00:14:30.313 "copy": true, 00:14:30.313 "nvme_iov_md": false 00:14:30.313 }, 00:14:30.313 "memory_domains": [ 00:14:30.313 { 00:14:30.313 "dma_device_id": "system", 00:14:30.313 "dma_device_type": 1 00:14:30.313 }, 00:14:30.313 { 00:14:30.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.313 "dma_device_type": 2 00:14:30.313 } 00:14:30.313 ], 00:14:30.313 "driver_specific": {} 00:14:30.313 } 00:14:30.313 ] 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:30.313 02:28:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.313 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.313 "name": "Existed_Raid", 00:14:30.314 "uuid": "18947aa6-18af-4736-b06f-107636c6535e", 00:14:30.314 "strip_size_kb": 64, 00:14:30.314 "state": "online", 00:14:30.314 "raid_level": "raid5f", 00:14:30.314 "superblock": true, 00:14:30.314 
"num_base_bdevs": 3, 00:14:30.314 "num_base_bdevs_discovered": 3, 00:14:30.314 "num_base_bdevs_operational": 3, 00:14:30.314 "base_bdevs_list": [ 00:14:30.314 { 00:14:30.314 "name": "BaseBdev1", 00:14:30.314 "uuid": "7305c9b8-c4a4-4d4d-966e-160fa8421959", 00:14:30.314 "is_configured": true, 00:14:30.314 "data_offset": 2048, 00:14:30.314 "data_size": 63488 00:14:30.314 }, 00:14:30.314 { 00:14:30.314 "name": "BaseBdev2", 00:14:30.314 "uuid": "e0eb5d21-881f-4975-8153-2755f670ce94", 00:14:30.314 "is_configured": true, 00:14:30.314 "data_offset": 2048, 00:14:30.314 "data_size": 63488 00:14:30.314 }, 00:14:30.314 { 00:14:30.314 "name": "BaseBdev3", 00:14:30.314 "uuid": "aaef3714-1751-454a-8a7b-d4c3b042f413", 00:14:30.314 "is_configured": true, 00:14:30.314 "data_offset": 2048, 00:14:30.314 "data_size": 63488 00:14:30.314 } 00:14:30.314 ] 00:14:30.314 }' 00:14:30.314 02:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.314 02:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.883 [2024-10-13 02:28:49.302107] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.883 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:30.883 "name": "Existed_Raid", 00:14:30.883 "aliases": [ 00:14:30.883 "18947aa6-18af-4736-b06f-107636c6535e" 00:14:30.883 ], 00:14:30.883 "product_name": "Raid Volume", 00:14:30.883 "block_size": 512, 00:14:30.883 "num_blocks": 126976, 00:14:30.883 "uuid": "18947aa6-18af-4736-b06f-107636c6535e", 00:14:30.883 "assigned_rate_limits": { 00:14:30.883 "rw_ios_per_sec": 0, 00:14:30.883 "rw_mbytes_per_sec": 0, 00:14:30.883 "r_mbytes_per_sec": 0, 00:14:30.883 "w_mbytes_per_sec": 0 00:14:30.883 }, 00:14:30.883 "claimed": false, 00:14:30.883 "zoned": false, 00:14:30.883 "supported_io_types": { 00:14:30.883 "read": true, 00:14:30.883 "write": true, 00:14:30.883 "unmap": false, 00:14:30.883 "flush": false, 00:14:30.883 "reset": true, 00:14:30.883 "nvme_admin": false, 00:14:30.883 "nvme_io": false, 00:14:30.883 "nvme_io_md": false, 00:14:30.883 "write_zeroes": true, 00:14:30.883 "zcopy": false, 00:14:30.883 "get_zone_info": false, 00:14:30.883 "zone_management": false, 00:14:30.883 "zone_append": false, 00:14:30.883 "compare": false, 00:14:30.883 "compare_and_write": false, 00:14:30.883 "abort": false, 00:14:30.883 "seek_hole": false, 00:14:30.883 "seek_data": false, 00:14:30.883 "copy": false, 00:14:30.883 "nvme_iov_md": false 00:14:30.884 }, 00:14:30.884 "driver_specific": { 00:14:30.884 "raid": { 00:14:30.884 "uuid": "18947aa6-18af-4736-b06f-107636c6535e", 00:14:30.884 
"strip_size_kb": 64, 00:14:30.884 "state": "online", 00:14:30.884 "raid_level": "raid5f", 00:14:30.884 "superblock": true, 00:14:30.884 "num_base_bdevs": 3, 00:14:30.884 "num_base_bdevs_discovered": 3, 00:14:30.884 "num_base_bdevs_operational": 3, 00:14:30.884 "base_bdevs_list": [ 00:14:30.884 { 00:14:30.884 "name": "BaseBdev1", 00:14:30.884 "uuid": "7305c9b8-c4a4-4d4d-966e-160fa8421959", 00:14:30.884 "is_configured": true, 00:14:30.884 "data_offset": 2048, 00:14:30.884 "data_size": 63488 00:14:30.884 }, 00:14:30.884 { 00:14:30.884 "name": "BaseBdev2", 00:14:30.884 "uuid": "e0eb5d21-881f-4975-8153-2755f670ce94", 00:14:30.884 "is_configured": true, 00:14:30.884 "data_offset": 2048, 00:14:30.884 "data_size": 63488 00:14:30.884 }, 00:14:30.884 { 00:14:30.884 "name": "BaseBdev3", 00:14:30.884 "uuid": "aaef3714-1751-454a-8a7b-d4c3b042f413", 00:14:30.884 "is_configured": true, 00:14:30.884 "data_offset": 2048, 00:14:30.884 "data_size": 63488 00:14:30.884 } 00:14:30.884 ] 00:14:30.884 } 00:14:30.884 } 00:14:30.884 }' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:30.884 BaseBdev2 00:14:30.884 BaseBdev3' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.884 [2024-10-13 02:28:49.549530] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.884 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.144 "name": "Existed_Raid", 00:14:31.144 "uuid": "18947aa6-18af-4736-b06f-107636c6535e", 00:14:31.144 "strip_size_kb": 64, 00:14:31.144 "state": "online", 00:14:31.144 "raid_level": "raid5f", 00:14:31.144 "superblock": true, 00:14:31.144 "num_base_bdevs": 3, 00:14:31.144 "num_base_bdevs_discovered": 2, 00:14:31.144 "num_base_bdevs_operational": 2, 
00:14:31.144 "base_bdevs_list": [ 00:14:31.144 { 00:14:31.144 "name": null, 00:14:31.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.144 "is_configured": false, 00:14:31.144 "data_offset": 0, 00:14:31.144 "data_size": 63488 00:14:31.144 }, 00:14:31.144 { 00:14:31.144 "name": "BaseBdev2", 00:14:31.144 "uuid": "e0eb5d21-881f-4975-8153-2755f670ce94", 00:14:31.144 "is_configured": true, 00:14:31.144 "data_offset": 2048, 00:14:31.144 "data_size": 63488 00:14:31.144 }, 00:14:31.144 { 00:14:31.144 "name": "BaseBdev3", 00:14:31.144 "uuid": "aaef3714-1751-454a-8a7b-d4c3b042f413", 00:14:31.144 "is_configured": true, 00:14:31.144 "data_offset": 2048, 00:14:31.144 "data_size": 63488 00:14:31.144 } 00:14:31.144 ] 00:14:31.144 }' 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.144 02:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.404 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.404 [2024-10-13 02:28:50.083757] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.404 [2024-10-13 02:28:50.083913] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.664 [2024-10-13 02:28:50.095078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:31.664 
02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.664 [2024-10-13 02:28:50.155007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:31.664 [2024-10-13 02:28:50.155102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:31.664 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 BaseBdev2 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 [ 00:14:31.665 { 
00:14:31.665 "name": "BaseBdev2", 00:14:31.665 "aliases": [ 00:14:31.665 "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4" 00:14:31.665 ], 00:14:31.665 "product_name": "Malloc disk", 00:14:31.665 "block_size": 512, 00:14:31.665 "num_blocks": 65536, 00:14:31.665 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:31.665 "assigned_rate_limits": { 00:14:31.665 "rw_ios_per_sec": 0, 00:14:31.665 "rw_mbytes_per_sec": 0, 00:14:31.665 "r_mbytes_per_sec": 0, 00:14:31.665 "w_mbytes_per_sec": 0 00:14:31.665 }, 00:14:31.665 "claimed": false, 00:14:31.665 "zoned": false, 00:14:31.665 "supported_io_types": { 00:14:31.665 "read": true, 00:14:31.665 "write": true, 00:14:31.665 "unmap": true, 00:14:31.665 "flush": true, 00:14:31.665 "reset": true, 00:14:31.665 "nvme_admin": false, 00:14:31.665 "nvme_io": false, 00:14:31.665 "nvme_io_md": false, 00:14:31.665 "write_zeroes": true, 00:14:31.665 "zcopy": true, 00:14:31.665 "get_zone_info": false, 00:14:31.665 "zone_management": false, 00:14:31.665 "zone_append": false, 00:14:31.665 "compare": false, 00:14:31.665 "compare_and_write": false, 00:14:31.665 "abort": true, 00:14:31.665 "seek_hole": false, 00:14:31.665 "seek_data": false, 00:14:31.665 "copy": true, 00:14:31.665 "nvme_iov_md": false 00:14:31.665 }, 00:14:31.665 "memory_domains": [ 00:14:31.665 { 00:14:31.665 "dma_device_id": "system", 00:14:31.665 "dma_device_type": 1 00:14:31.665 }, 00:14:31.665 { 00:14:31.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.665 "dma_device_type": 2 00:14:31.665 } 00:14:31.665 ], 00:14:31.665 "driver_specific": {} 00:14:31.665 } 00:14:31.665 ] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 BaseBdev3 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.665 02:28:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 [ 00:14:31.665 { 00:14:31.665 "name": "BaseBdev3", 00:14:31.665 "aliases": [ 00:14:31.665 "11cdaf66-7a01-4e70-9fd9-707800d28079" 00:14:31.665 ], 00:14:31.665 "product_name": "Malloc disk", 00:14:31.665 "block_size": 512, 00:14:31.665 "num_blocks": 65536, 00:14:31.665 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:31.665 "assigned_rate_limits": { 00:14:31.665 "rw_ios_per_sec": 0, 00:14:31.665 "rw_mbytes_per_sec": 0, 00:14:31.665 "r_mbytes_per_sec": 0, 00:14:31.665 "w_mbytes_per_sec": 0 00:14:31.665 }, 00:14:31.665 "claimed": false, 00:14:31.665 "zoned": false, 00:14:31.665 "supported_io_types": { 00:14:31.665 "read": true, 00:14:31.665 "write": true, 00:14:31.665 "unmap": true, 00:14:31.665 "flush": true, 00:14:31.665 "reset": true, 00:14:31.665 "nvme_admin": false, 00:14:31.665 "nvme_io": false, 00:14:31.665 "nvme_io_md": false, 00:14:31.665 "write_zeroes": true, 00:14:31.665 "zcopy": true, 00:14:31.665 "get_zone_info": false, 00:14:31.665 "zone_management": false, 00:14:31.665 "zone_append": false, 00:14:31.665 "compare": false, 00:14:31.665 "compare_and_write": false, 00:14:31.665 "abort": true, 00:14:31.665 "seek_hole": false, 00:14:31.665 "seek_data": false, 00:14:31.665 "copy": true, 00:14:31.665 "nvme_iov_md": false 00:14:31.665 }, 00:14:31.665 "memory_domains": [ 00:14:31.665 { 00:14:31.665 "dma_device_id": "system", 00:14:31.665 "dma_device_type": 1 00:14:31.665 }, 00:14:31.665 { 00:14:31.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.665 "dma_device_type": 2 00:14:31.665 } 00:14:31.665 ], 00:14:31.665 "driver_specific": {} 00:14:31.665 } 00:14:31.665 ] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 [2024-10-13 02:28:50.313729] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.665 [2024-10-13 02:28:50.313837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.665 [2024-10-13 02:28:50.313887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.665 [2024-10-13 02:28:50.315733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.665 02:28:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.925 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.925 "name": "Existed_Raid", 00:14:31.925 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:31.925 "strip_size_kb": 64, 00:14:31.925 "state": "configuring", 00:14:31.925 "raid_level": "raid5f", 00:14:31.925 "superblock": true, 00:14:31.925 "num_base_bdevs": 3, 00:14:31.925 "num_base_bdevs_discovered": 2, 00:14:31.925 "num_base_bdevs_operational": 3, 00:14:31.925 "base_bdevs_list": [ 00:14:31.925 { 00:14:31.925 "name": "BaseBdev1", 00:14:31.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.926 "is_configured": false, 00:14:31.926 "data_offset": 0, 00:14:31.926 "data_size": 0 00:14:31.926 }, 00:14:31.926 { 00:14:31.926 "name": "BaseBdev2", 00:14:31.926 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:31.926 "is_configured": true, 00:14:31.926 "data_offset": 2048, 00:14:31.926 "data_size": 63488 00:14:31.926 }, 00:14:31.926 { 
00:14:31.926 "name": "BaseBdev3", 00:14:31.926 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:31.926 "is_configured": true, 00:14:31.926 "data_offset": 2048, 00:14:31.926 "data_size": 63488 00:14:31.926 } 00:14:31.926 ] 00:14:31.926 }' 00:14:31.926 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.926 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.186 [2024-10-13 02:28:50.748950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.186 "name": "Existed_Raid", 00:14:32.186 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:32.186 "strip_size_kb": 64, 00:14:32.186 "state": "configuring", 00:14:32.186 "raid_level": "raid5f", 00:14:32.186 "superblock": true, 00:14:32.186 "num_base_bdevs": 3, 00:14:32.186 "num_base_bdevs_discovered": 1, 00:14:32.186 "num_base_bdevs_operational": 3, 00:14:32.186 "base_bdevs_list": [ 00:14:32.186 { 00:14:32.186 "name": "BaseBdev1", 00:14:32.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.186 "is_configured": false, 00:14:32.186 "data_offset": 0, 00:14:32.186 "data_size": 0 00:14:32.186 }, 00:14:32.186 { 00:14:32.186 "name": null, 00:14:32.186 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:32.186 "is_configured": false, 00:14:32.186 "data_offset": 0, 00:14:32.186 "data_size": 63488 00:14:32.186 }, 00:14:32.186 { 00:14:32.186 "name": "BaseBdev3", 00:14:32.186 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:32.186 "is_configured": true, 00:14:32.186 "data_offset": 2048, 00:14:32.186 "data_size": 
63488 00:14:32.186 } 00:14:32.186 ] 00:14:32.186 }' 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.186 02:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.756 [2024-10-13 02:28:51.203073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.756 BaseBdev1 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:32.756 02:28:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.756 [ 00:14:32.756 { 00:14:32.756 "name": "BaseBdev1", 00:14:32.756 "aliases": [ 00:14:32.756 "dd1d04bc-0f23-4d20-9f86-d694baccb1c4" 00:14:32.756 ], 00:14:32.756 "product_name": "Malloc disk", 00:14:32.756 "block_size": 512, 00:14:32.756 "num_blocks": 65536, 00:14:32.756 "uuid": "dd1d04bc-0f23-4d20-9f86-d694baccb1c4", 00:14:32.756 "assigned_rate_limits": { 00:14:32.756 "rw_ios_per_sec": 0, 00:14:32.756 "rw_mbytes_per_sec": 0, 00:14:32.756 "r_mbytes_per_sec": 0, 00:14:32.756 "w_mbytes_per_sec": 0 00:14:32.756 }, 00:14:32.756 "claimed": true, 00:14:32.756 "claim_type": "exclusive_write", 00:14:32.756 "zoned": false, 00:14:32.756 "supported_io_types": { 00:14:32.756 "read": true, 00:14:32.756 "write": true, 00:14:32.756 "unmap": true, 00:14:32.756 "flush": true, 00:14:32.756 "reset": true, 00:14:32.756 "nvme_admin": false, 00:14:32.756 
"nvme_io": false, 00:14:32.756 "nvme_io_md": false, 00:14:32.756 "write_zeroes": true, 00:14:32.756 "zcopy": true, 00:14:32.756 "get_zone_info": false, 00:14:32.756 "zone_management": false, 00:14:32.756 "zone_append": false, 00:14:32.756 "compare": false, 00:14:32.756 "compare_and_write": false, 00:14:32.756 "abort": true, 00:14:32.756 "seek_hole": false, 00:14:32.756 "seek_data": false, 00:14:32.756 "copy": true, 00:14:32.756 "nvme_iov_md": false 00:14:32.756 }, 00:14:32.756 "memory_domains": [ 00:14:32.756 { 00:14:32.756 "dma_device_id": "system", 00:14:32.756 "dma_device_type": 1 00:14:32.756 }, 00:14:32.756 { 00:14:32.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.756 "dma_device_type": 2 00:14:32.756 } 00:14:32.756 ], 00:14:32.756 "driver_specific": {} 00:14:32.756 } 00:14:32.756 ] 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.756 "name": "Existed_Raid", 00:14:32.756 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:32.756 "strip_size_kb": 64, 00:14:32.756 "state": "configuring", 00:14:32.756 "raid_level": "raid5f", 00:14:32.756 "superblock": true, 00:14:32.756 "num_base_bdevs": 3, 00:14:32.756 "num_base_bdevs_discovered": 2, 00:14:32.756 "num_base_bdevs_operational": 3, 00:14:32.756 "base_bdevs_list": [ 00:14:32.756 { 00:14:32.756 "name": "BaseBdev1", 00:14:32.756 "uuid": "dd1d04bc-0f23-4d20-9f86-d694baccb1c4", 00:14:32.756 "is_configured": true, 00:14:32.756 "data_offset": 2048, 00:14:32.756 "data_size": 63488 00:14:32.756 }, 00:14:32.756 { 00:14:32.756 "name": null, 00:14:32.756 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:32.756 "is_configured": false, 00:14:32.756 "data_offset": 0, 00:14:32.756 "data_size": 63488 00:14:32.756 }, 00:14:32.756 { 00:14:32.756 "name": "BaseBdev3", 00:14:32.756 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:32.756 "is_configured": true, 00:14:32.756 "data_offset": 2048, 00:14:32.756 "data_size": 
63488 00:14:32.756 } 00:14:32.756 ] 00:14:32.756 }' 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.756 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.016 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.016 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:33.016 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.016 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.275 [2024-10-13 02:28:51.746173] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.275 02:28:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.275 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.276 "name": "Existed_Raid", 00:14:33.276 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:33.276 "strip_size_kb": 64, 00:14:33.276 "state": "configuring", 00:14:33.276 "raid_level": "raid5f", 00:14:33.276 "superblock": true, 00:14:33.276 "num_base_bdevs": 3, 00:14:33.276 "num_base_bdevs_discovered": 1, 00:14:33.276 "num_base_bdevs_operational": 3, 00:14:33.276 "base_bdevs_list": [ 00:14:33.276 { 00:14:33.276 "name": "BaseBdev1", 00:14:33.276 "uuid": "dd1d04bc-0f23-4d20-9f86-d694baccb1c4", 
00:14:33.276 "is_configured": true, 00:14:33.276 "data_offset": 2048, 00:14:33.276 "data_size": 63488 00:14:33.276 }, 00:14:33.276 { 00:14:33.276 "name": null, 00:14:33.276 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:33.276 "is_configured": false, 00:14:33.276 "data_offset": 0, 00:14:33.276 "data_size": 63488 00:14:33.276 }, 00:14:33.276 { 00:14:33.276 "name": null, 00:14:33.276 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:33.276 "is_configured": false, 00:14:33.276 "data_offset": 0, 00:14:33.276 "data_size": 63488 00:14:33.276 } 00:14:33.276 ] 00:14:33.276 }' 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.276 02:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.535 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.536 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:33.536 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.536 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.536 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.795 [2024-10-13 02:28:52.249337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.795 02:28:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.795 "name": "Existed_Raid", 00:14:33.795 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:33.796 "strip_size_kb": 64, 00:14:33.796 "state": "configuring", 00:14:33.796 "raid_level": "raid5f", 00:14:33.796 "superblock": true, 00:14:33.796 "num_base_bdevs": 3, 00:14:33.796 "num_base_bdevs_discovered": 2, 00:14:33.796 "num_base_bdevs_operational": 3, 00:14:33.796 "base_bdevs_list": [ 00:14:33.796 { 00:14:33.796 "name": "BaseBdev1", 00:14:33.796 "uuid": "dd1d04bc-0f23-4d20-9f86-d694baccb1c4", 00:14:33.796 "is_configured": true, 00:14:33.796 "data_offset": 2048, 00:14:33.796 "data_size": 63488 00:14:33.796 }, 00:14:33.796 { 00:14:33.796 "name": null, 00:14:33.796 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:33.796 "is_configured": false, 00:14:33.796 "data_offset": 0, 00:14:33.796 "data_size": 63488 00:14:33.796 }, 00:14:33.796 { 00:14:33.796 "name": "BaseBdev3", 00:14:33.796 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:33.796 "is_configured": true, 00:14:33.796 "data_offset": 2048, 00:14:33.796 "data_size": 63488 00:14:33.796 } 00:14:33.796 ] 00:14:33.796 }' 00:14:33.796 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.796 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.055 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:34.055 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.056 02:28:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.056 [2024-10-13 02:28:52.708645] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.056 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.316 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.316 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.316 "name": "Existed_Raid", 00:14:34.316 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:34.316 "strip_size_kb": 64, 00:14:34.316 "state": "configuring", 00:14:34.316 "raid_level": "raid5f", 00:14:34.316 "superblock": true, 00:14:34.316 "num_base_bdevs": 3, 00:14:34.316 "num_base_bdevs_discovered": 1, 00:14:34.316 "num_base_bdevs_operational": 3, 00:14:34.316 "base_bdevs_list": [ 00:14:34.316 { 00:14:34.316 "name": null, 00:14:34.316 "uuid": "dd1d04bc-0f23-4d20-9f86-d694baccb1c4", 00:14:34.316 "is_configured": false, 00:14:34.316 "data_offset": 0, 00:14:34.316 "data_size": 63488 00:14:34.316 }, 00:14:34.316 { 00:14:34.316 "name": null, 00:14:34.316 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:34.316 "is_configured": false, 00:14:34.316 "data_offset": 0, 00:14:34.316 "data_size": 63488 00:14:34.316 }, 00:14:34.316 { 00:14:34.316 "name": "BaseBdev3", 00:14:34.316 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:34.316 "is_configured": true, 00:14:34.316 "data_offset": 2048, 00:14:34.316 "data_size": 63488 00:14:34.316 } 00:14:34.316 ] 00:14:34.316 }' 00:14:34.316 02:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.316 02:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.576 [2024-10-13 02:28:53.214101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.576 02:28:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.576 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.837 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.837 "name": "Existed_Raid", 00:14:34.837 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:34.837 "strip_size_kb": 64, 00:14:34.837 "state": "configuring", 00:14:34.837 "raid_level": "raid5f", 00:14:34.837 "superblock": true, 00:14:34.837 "num_base_bdevs": 3, 00:14:34.837 "num_base_bdevs_discovered": 2, 00:14:34.837 "num_base_bdevs_operational": 3, 00:14:34.837 "base_bdevs_list": [ 00:14:34.837 { 00:14:34.837 "name": null, 00:14:34.837 "uuid": "dd1d04bc-0f23-4d20-9f86-d694baccb1c4", 00:14:34.837 "is_configured": false, 00:14:34.837 "data_offset": 0, 00:14:34.837 "data_size": 63488 00:14:34.837 }, 00:14:34.837 { 00:14:34.837 "name": "BaseBdev2", 00:14:34.837 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:34.837 "is_configured": true, 00:14:34.837 "data_offset": 2048, 00:14:34.837 "data_size": 63488 00:14:34.837 }, 00:14:34.837 { 
00:14:34.837 "name": "BaseBdev3", 00:14:34.837 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:34.837 "is_configured": true, 00:14:34.837 "data_offset": 2048, 00:14:34.837 "data_size": 63488 00:14:34.837 } 00:14:34.837 ] 00:14:34.837 }' 00:14:34.837 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.837 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dd1d04bc-0f23-4d20-9f86-d694baccb1c4 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.097 [2024-10-13 02:28:53.684135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:35.097 [2024-10-13 02:28:53.684371] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:35.097 [2024-10-13 02:28:53.684426] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:35.097 NewBaseBdev 00:14:35.097 [2024-10-13 02:28:53.684678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:14:35.097 [2024-10-13 02:28:53.685074] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:35.097 [2024-10-13 02:28:53.685091] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:35.097 [2024-10-13 02:28:53.685199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- 
# rpc_cmd bdev_wait_for_examine 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.097 [ 00:14:35.097 { 00:14:35.097 "name": "NewBaseBdev", 00:14:35.097 "aliases": [ 00:14:35.097 "dd1d04bc-0f23-4d20-9f86-d694baccb1c4" 00:14:35.097 ], 00:14:35.097 "product_name": "Malloc disk", 00:14:35.097 "block_size": 512, 00:14:35.097 "num_blocks": 65536, 00:14:35.097 "uuid": "dd1d04bc-0f23-4d20-9f86-d694baccb1c4", 00:14:35.097 "assigned_rate_limits": { 00:14:35.097 "rw_ios_per_sec": 0, 00:14:35.097 "rw_mbytes_per_sec": 0, 00:14:35.097 "r_mbytes_per_sec": 0, 00:14:35.097 "w_mbytes_per_sec": 0 00:14:35.097 }, 00:14:35.097 "claimed": true, 00:14:35.097 "claim_type": "exclusive_write", 00:14:35.097 "zoned": false, 00:14:35.097 "supported_io_types": { 00:14:35.097 "read": true, 00:14:35.097 "write": true, 00:14:35.097 "unmap": true, 00:14:35.097 "flush": true, 00:14:35.097 "reset": true, 00:14:35.097 "nvme_admin": false, 00:14:35.097 "nvme_io": false, 00:14:35.097 "nvme_io_md": false, 00:14:35.097 "write_zeroes": true, 00:14:35.097 "zcopy": true, 00:14:35.097 "get_zone_info": false, 00:14:35.097 "zone_management": false, 00:14:35.097 "zone_append": false, 00:14:35.097 "compare": false, 00:14:35.097 "compare_and_write": false, 00:14:35.097 "abort": true, 00:14:35.097 "seek_hole": false, 00:14:35.097 "seek_data": false, 00:14:35.097 
"copy": true, 00:14:35.097 "nvme_iov_md": false 00:14:35.097 }, 00:14:35.097 "memory_domains": [ 00:14:35.097 { 00:14:35.097 "dma_device_id": "system", 00:14:35.097 "dma_device_type": 1 00:14:35.097 }, 00:14:35.097 { 00:14:35.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.097 "dma_device_type": 2 00:14:35.097 } 00:14:35.097 ], 00:14:35.097 "driver_specific": {} 00:14:35.097 } 00:14:35.097 ] 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.097 02:28:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.097 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.097 "name": "Existed_Raid", 00:14:35.097 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:35.097 "strip_size_kb": 64, 00:14:35.097 "state": "online", 00:14:35.097 "raid_level": "raid5f", 00:14:35.097 "superblock": true, 00:14:35.097 "num_base_bdevs": 3, 00:14:35.097 "num_base_bdevs_discovered": 3, 00:14:35.097 "num_base_bdevs_operational": 3, 00:14:35.097 "base_bdevs_list": [ 00:14:35.097 { 00:14:35.097 "name": "NewBaseBdev", 00:14:35.098 "uuid": "dd1d04bc-0f23-4d20-9f86-d694baccb1c4", 00:14:35.098 "is_configured": true, 00:14:35.098 "data_offset": 2048, 00:14:35.098 "data_size": 63488 00:14:35.098 }, 00:14:35.098 { 00:14:35.098 "name": "BaseBdev2", 00:14:35.098 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:35.098 "is_configured": true, 00:14:35.098 "data_offset": 2048, 00:14:35.098 "data_size": 63488 00:14:35.098 }, 00:14:35.098 { 00:14:35.098 "name": "BaseBdev3", 00:14:35.098 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:35.098 "is_configured": true, 00:14:35.098 "data_offset": 2048, 00:14:35.098 "data_size": 63488 00:14:35.098 } 00:14:35.098 ] 00:14:35.098 }' 00:14:35.098 02:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.098 02:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:35.668 [2024-10-13 02:28:54.163624] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:35.668 "name": "Existed_Raid", 00:14:35.668 "aliases": [ 00:14:35.668 "bebd7f2e-0c72-4b00-82d0-9c12027d4e68" 00:14:35.668 ], 00:14:35.668 "product_name": "Raid Volume", 00:14:35.668 "block_size": 512, 00:14:35.668 "num_blocks": 126976, 00:14:35.668 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:35.668 "assigned_rate_limits": { 00:14:35.668 "rw_ios_per_sec": 0, 00:14:35.668 "rw_mbytes_per_sec": 0, 00:14:35.668 "r_mbytes_per_sec": 0, 00:14:35.668 "w_mbytes_per_sec": 0 00:14:35.668 }, 00:14:35.668 "claimed": false, 00:14:35.668 "zoned": false, 00:14:35.668 
"supported_io_types": { 00:14:35.668 "read": true, 00:14:35.668 "write": true, 00:14:35.668 "unmap": false, 00:14:35.668 "flush": false, 00:14:35.668 "reset": true, 00:14:35.668 "nvme_admin": false, 00:14:35.668 "nvme_io": false, 00:14:35.668 "nvme_io_md": false, 00:14:35.668 "write_zeroes": true, 00:14:35.668 "zcopy": false, 00:14:35.668 "get_zone_info": false, 00:14:35.668 "zone_management": false, 00:14:35.668 "zone_append": false, 00:14:35.668 "compare": false, 00:14:35.668 "compare_and_write": false, 00:14:35.668 "abort": false, 00:14:35.668 "seek_hole": false, 00:14:35.668 "seek_data": false, 00:14:35.668 "copy": false, 00:14:35.668 "nvme_iov_md": false 00:14:35.668 }, 00:14:35.668 "driver_specific": { 00:14:35.668 "raid": { 00:14:35.668 "uuid": "bebd7f2e-0c72-4b00-82d0-9c12027d4e68", 00:14:35.668 "strip_size_kb": 64, 00:14:35.668 "state": "online", 00:14:35.668 "raid_level": "raid5f", 00:14:35.668 "superblock": true, 00:14:35.668 "num_base_bdevs": 3, 00:14:35.668 "num_base_bdevs_discovered": 3, 00:14:35.668 "num_base_bdevs_operational": 3, 00:14:35.668 "base_bdevs_list": [ 00:14:35.668 { 00:14:35.668 "name": "NewBaseBdev", 00:14:35.668 "uuid": "dd1d04bc-0f23-4d20-9f86-d694baccb1c4", 00:14:35.668 "is_configured": true, 00:14:35.668 "data_offset": 2048, 00:14:35.668 "data_size": 63488 00:14:35.668 }, 00:14:35.668 { 00:14:35.668 "name": "BaseBdev2", 00:14:35.668 "uuid": "3a0af329-72bd-4bc1-ab6f-a875eb8f9fd4", 00:14:35.668 "is_configured": true, 00:14:35.668 "data_offset": 2048, 00:14:35.668 "data_size": 63488 00:14:35.668 }, 00:14:35.668 { 00:14:35.668 "name": "BaseBdev3", 00:14:35.668 "uuid": "11cdaf66-7a01-4e70-9fd9-707800d28079", 00:14:35.668 "is_configured": true, 00:14:35.668 "data_offset": 2048, 00:14:35.668 "data_size": 63488 00:14:35.668 } 00:14:35.668 ] 00:14:35.668 } 00:14:35.668 } 00:14:35.668 }' 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:35.668 BaseBdev2 00:14:35.668 BaseBdev3' 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.668 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.929 [2024-10-13 02:28:54.403010] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.929 [2024-10-13 02:28:54.403034] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:14:35.929 [2024-10-13 02:28:54.403093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.929 [2024-10-13 02:28:54.403322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.929 [2024-10-13 02:28:54.403336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 90992 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 90992 ']' 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 90992 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90992 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:35.929 killing process with pid 90992 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90992' 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 90992 00:14:35.929 [2024-10-13 02:28:54.450805] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:35.929 02:28:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@974 -- # wait 90992 00:14:35.929 [2024-10-13 02:28:54.481792] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.189 ************************************ 00:14:36.189 END TEST raid5f_state_function_test_sb 00:14:36.189 02:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:36.189 00:14:36.189 real 0m8.768s 00:14:36.189 user 0m14.893s 00:14:36.189 sys 0m1.890s 00:14:36.189 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:36.189 02:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.189 ************************************ 00:14:36.189 02:28:54 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:36.189 02:28:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:36.189 02:28:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:36.189 02:28:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.189 ************************************ 00:14:36.189 START TEST raid5f_superblock_test 00:14:36.189 ************************************ 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91597 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91597 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91597 ']' 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.189 02:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.190 02:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.450 [2024-10-13 02:28:54.899043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:36.450 [2024-10-13 02:28:54.899247] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91597 ] 00:14:36.450 [2024-10-13 02:28:55.029052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.450 [2024-10-13 02:28:55.072987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.450 [2024-10-13 02:28:55.115091] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.450 [2024-10-13 02:28:55.115135] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.392 malloc1 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.392 [2024-10-13 02:28:55.745513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:37.392 [2024-10-13 02:28:55.745672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.392 [2024-10-13 02:28:55.745716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:37.392 [2024-10-13 02:28:55.745751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.392 [2024-10-13 02:28:55.747919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.392 [2024-10-13 02:28:55.747997] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:37.392 pt1 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.392 malloc2 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.392 [2024-10-13 
02:28:55.781758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.392 [2024-10-13 02:28:55.781891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.392 [2024-10-13 02:28:55.781927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:37.392 [2024-10-13 02:28:55.781957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.392 [2024-10-13 02:28:55.784104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.392 [2024-10-13 02:28:55.784194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.392 pt2 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.392 02:28:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.392 malloc3 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.392 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.392 [2024-10-13 02:28:55.814478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:37.392 [2024-10-13 02:28:55.814594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.392 [2024-10-13 02:28:55.814628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:37.393 [2024-10-13 02:28:55.814658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.393 [2024-10-13 02:28:55.816751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.393 [2024-10-13 02:28:55.816830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:37.393 pt3 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:14:37.393 [2024-10-13 02:28:55.826536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:37.393 [2024-10-13 02:28:55.828294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.393 [2024-10-13 02:28:55.828345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:37.393 [2024-10-13 02:28:55.828500] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:37.393 [2024-10-13 02:28:55.828512] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:37.393 [2024-10-13 02:28:55.828763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:37.393 [2024-10-13 02:28:55.829215] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:37.393 [2024-10-13 02:28:55.829237] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:37.393 [2024-10-13 02:28:55.829371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.393 02:28:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.393 "name": "raid_bdev1", 00:14:37.393 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:37.393 "strip_size_kb": 64, 00:14:37.393 "state": "online", 00:14:37.393 "raid_level": "raid5f", 00:14:37.393 "superblock": true, 00:14:37.393 "num_base_bdevs": 3, 00:14:37.393 "num_base_bdevs_discovered": 3, 00:14:37.393 "num_base_bdevs_operational": 3, 00:14:37.393 "base_bdevs_list": [ 00:14:37.393 { 00:14:37.393 "name": "pt1", 00:14:37.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.393 "is_configured": true, 00:14:37.393 "data_offset": 2048, 00:14:37.393 "data_size": 63488 00:14:37.393 }, 00:14:37.393 { 00:14:37.393 "name": "pt2", 00:14:37.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.393 "is_configured": true, 00:14:37.393 "data_offset": 2048, 00:14:37.393 "data_size": 63488 00:14:37.393 }, 00:14:37.393 { 00:14:37.393 "name": "pt3", 00:14:37.393 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:37.393 "is_configured": true, 00:14:37.393 "data_offset": 2048, 00:14:37.393 "data_size": 63488 00:14:37.393 } 00:14:37.393 ] 00:14:37.393 }' 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.393 02:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.653 [2024-10-13 02:28:56.258150] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.653 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.653 "name": "raid_bdev1", 00:14:37.653 "aliases": [ 00:14:37.653 "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14" 00:14:37.653 ], 00:14:37.653 "product_name": "Raid Volume", 00:14:37.653 
"block_size": 512, 00:14:37.653 "num_blocks": 126976, 00:14:37.653 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:37.653 "assigned_rate_limits": { 00:14:37.653 "rw_ios_per_sec": 0, 00:14:37.653 "rw_mbytes_per_sec": 0, 00:14:37.653 "r_mbytes_per_sec": 0, 00:14:37.653 "w_mbytes_per_sec": 0 00:14:37.653 }, 00:14:37.653 "claimed": false, 00:14:37.653 "zoned": false, 00:14:37.653 "supported_io_types": { 00:14:37.653 "read": true, 00:14:37.653 "write": true, 00:14:37.653 "unmap": false, 00:14:37.653 "flush": false, 00:14:37.653 "reset": true, 00:14:37.653 "nvme_admin": false, 00:14:37.653 "nvme_io": false, 00:14:37.653 "nvme_io_md": false, 00:14:37.653 "write_zeroes": true, 00:14:37.653 "zcopy": false, 00:14:37.653 "get_zone_info": false, 00:14:37.653 "zone_management": false, 00:14:37.653 "zone_append": false, 00:14:37.653 "compare": false, 00:14:37.653 "compare_and_write": false, 00:14:37.653 "abort": false, 00:14:37.653 "seek_hole": false, 00:14:37.653 "seek_data": false, 00:14:37.653 "copy": false, 00:14:37.653 "nvme_iov_md": false 00:14:37.653 }, 00:14:37.653 "driver_specific": { 00:14:37.653 "raid": { 00:14:37.654 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:37.654 "strip_size_kb": 64, 00:14:37.654 "state": "online", 00:14:37.654 "raid_level": "raid5f", 00:14:37.654 "superblock": true, 00:14:37.654 "num_base_bdevs": 3, 00:14:37.654 "num_base_bdevs_discovered": 3, 00:14:37.654 "num_base_bdevs_operational": 3, 00:14:37.654 "base_bdevs_list": [ 00:14:37.654 { 00:14:37.654 "name": "pt1", 00:14:37.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.654 "is_configured": true, 00:14:37.654 "data_offset": 2048, 00:14:37.654 "data_size": 63488 00:14:37.654 }, 00:14:37.654 { 00:14:37.654 "name": "pt2", 00:14:37.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.654 "is_configured": true, 00:14:37.654 "data_offset": 2048, 00:14:37.654 "data_size": 63488 00:14:37.654 }, 00:14:37.654 { 00:14:37.654 "name": "pt3", 00:14:37.654 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:37.654 "is_configured": true, 00:14:37.654 "data_offset": 2048, 00:14:37.654 "data_size": 63488 00:14:37.654 } 00:14:37.654 ] 00:14:37.654 } 00:14:37.654 } 00:14:37.654 }' 00:14:37.654 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.654 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:37.654 pt2 00:14:37.654 pt3' 00:14:37.654 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:37.914 
[2024-10-13 02:28:56.505674] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d8338c1d-de7b-47ed-87ba-1eef1f7ebf14 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d8338c1d-de7b-47ed-87ba-1eef1f7ebf14 ']' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.914 [2024-10-13 02:28:56.553439] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.914 [2024-10-13 02:28:56.553504] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.914 [2024-10-13 02:28:56.553593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.914 [2024-10-13 02:28:56.553704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.914 [2024-10-13 02:28:56.553795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.914 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.177 
02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:38.177 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:38.178 [2024-10-13 02:28:56.709201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:38.178 [2024-10-13 02:28:56.711096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:38.178 [2024-10-13 02:28:56.711177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:38.178 [2024-10-13 02:28:56.711243] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:38.178 [2024-10-13 02:28:56.711329] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:38.178 [2024-10-13 02:28:56.711387] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:38.178 [2024-10-13 02:28:56.711453] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.178 [2024-10-13 02:28:56.711491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:14:38.178 request: 00:14:38.178 { 00:14:38.178 "name": "raid_bdev1", 00:14:38.178 "raid_level": "raid5f", 00:14:38.178 "base_bdevs": [ 00:14:38.178 "malloc1", 00:14:38.178 "malloc2", 00:14:38.178 "malloc3" 00:14:38.178 ], 00:14:38.178 "strip_size_kb": 64, 00:14:38.178 "superblock": false, 00:14:38.178 "method": "bdev_raid_create", 00:14:38.178 "req_id": 1 00:14:38.178 } 00:14:38.178 Got JSON-RPC error response 00:14:38.178 response: 00:14:38.178 { 00:14:38.178 "code": -17, 00:14:38.178 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:38.178 } 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.178 [2024-10-13 02:28:56.777068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:38.178 [2024-10-13 02:28:56.777183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.178 [2024-10-13 02:28:56.777217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:38.178 [2024-10-13 02:28:56.777249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.178 [2024-10-13 02:28:56.779440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.178 [2024-10-13 
02:28:56.779527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:38.178 [2024-10-13 02:28:56.779627] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:38.178 [2024-10-13 02:28:56.779683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:38.178 pt1 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.178 02:28:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.178 "name": "raid_bdev1", 00:14:38.178 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:38.178 "strip_size_kb": 64, 00:14:38.178 "state": "configuring", 00:14:38.178 "raid_level": "raid5f", 00:14:38.178 "superblock": true, 00:14:38.178 "num_base_bdevs": 3, 00:14:38.178 "num_base_bdevs_discovered": 1, 00:14:38.178 "num_base_bdevs_operational": 3, 00:14:38.178 "base_bdevs_list": [ 00:14:38.178 { 00:14:38.178 "name": "pt1", 00:14:38.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.178 "is_configured": true, 00:14:38.178 "data_offset": 2048, 00:14:38.178 "data_size": 63488 00:14:38.178 }, 00:14:38.178 { 00:14:38.178 "name": null, 00:14:38.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.178 "is_configured": false, 00:14:38.178 "data_offset": 2048, 00:14:38.178 "data_size": 63488 00:14:38.178 }, 00:14:38.178 { 00:14:38.178 "name": null, 00:14:38.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.178 "is_configured": false, 00:14:38.178 "data_offset": 2048, 00:14:38.178 "data_size": 63488 00:14:38.178 } 00:14:38.178 ] 00:14:38.178 }' 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.178 02:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.771 02:28:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.771 [2024-10-13 02:28:57.252261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.771 [2024-10-13 02:28:57.252336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.771 [2024-10-13 02:28:57.252357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:38.771 [2024-10-13 02:28:57.252370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.771 [2024-10-13 02:28:57.252748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.771 [2024-10-13 02:28:57.252765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.771 [2024-10-13 02:28:57.252833] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:38.771 [2024-10-13 02:28:57.252855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.771 pt2 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.771 [2024-10-13 02:28:57.264227] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.771 02:28:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.771 "name": "raid_bdev1", 00:14:38.771 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:38.771 "strip_size_kb": 64, 00:14:38.771 "state": "configuring", 00:14:38.771 "raid_level": "raid5f", 00:14:38.771 "superblock": true, 00:14:38.771 "num_base_bdevs": 3, 00:14:38.771 "num_base_bdevs_discovered": 1, 00:14:38.771 "num_base_bdevs_operational": 3, 00:14:38.771 "base_bdevs_list": [ 00:14:38.771 { 00:14:38.771 "name": "pt1", 00:14:38.771 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:38.771 "is_configured": true, 00:14:38.771 "data_offset": 2048, 00:14:38.771 "data_size": 63488 00:14:38.771 }, 00:14:38.771 { 00:14:38.771 "name": null, 00:14:38.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.771 "is_configured": false, 00:14:38.771 "data_offset": 0, 00:14:38.771 "data_size": 63488 00:14:38.771 }, 00:14:38.771 { 00:14:38.771 "name": null, 00:14:38.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.771 "is_configured": false, 00:14:38.771 "data_offset": 2048, 00:14:38.771 "data_size": 63488 00:14:38.771 } 00:14:38.771 ] 00:14:38.771 }' 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.771 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.342 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:39.342 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:39.342 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:39.342 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.342 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.342 [2024-10-13 02:28:57.723730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:39.342 [2024-10-13 02:28:57.723859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.342 [2024-10-13 02:28:57.723932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:39.342 [2024-10-13 02:28:57.723964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.342 [2024-10-13 02:28:57.724374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:14:39.342 [2024-10-13 02:28:57.724430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:39.342 [2024-10-13 02:28:57.724530] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:39.342 [2024-10-13 02:28:57.724578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:39.342 pt2 00:14:39.342 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.342 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:39.342 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.343 [2024-10-13 02:28:57.735717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:39.343 [2024-10-13 02:28:57.735819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.343 [2024-10-13 02:28:57.735855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:39.343 [2024-10-13 02:28:57.735896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.343 [2024-10-13 02:28:57.736219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.343 [2024-10-13 02:28:57.736271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:39.343 [2024-10-13 02:28:57.736352] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:39.343 [2024-10-13 02:28:57.736414] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:39.343 [2024-10-13 02:28:57.736540] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:39.343 [2024-10-13 02:28:57.736575] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:39.343 [2024-10-13 02:28:57.736807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:39.343 [2024-10-13 02:28:57.737211] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:39.343 [2024-10-13 02:28:57.737260] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:39.343 [2024-10-13 02:28:57.737394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.343 pt3 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.343 "name": "raid_bdev1", 00:14:39.343 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:39.343 "strip_size_kb": 64, 00:14:39.343 "state": "online", 00:14:39.343 "raid_level": "raid5f", 00:14:39.343 "superblock": true, 00:14:39.343 "num_base_bdevs": 3, 00:14:39.343 "num_base_bdevs_discovered": 3, 00:14:39.343 "num_base_bdevs_operational": 3, 00:14:39.343 "base_bdevs_list": [ 00:14:39.343 { 00:14:39.343 "name": "pt1", 00:14:39.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.343 "is_configured": true, 00:14:39.343 "data_offset": 2048, 00:14:39.343 "data_size": 63488 00:14:39.343 }, 00:14:39.343 { 00:14:39.343 "name": "pt2", 00:14:39.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.343 "is_configured": true, 00:14:39.343 "data_offset": 2048, 00:14:39.343 "data_size": 63488 00:14:39.343 }, 00:14:39.343 { 00:14:39.343 "name": "pt3", 00:14:39.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.343 "is_configured": true, 00:14:39.343 "data_offset": 2048, 
00:14:39.343 "data_size": 63488 00:14:39.343 } 00:14:39.343 ] 00:14:39.343 }' 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.343 02:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.603 [2024-10-13 02:28:58.219213] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.603 "name": "raid_bdev1", 00:14:39.603 "aliases": [ 00:14:39.603 "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14" 00:14:39.603 ], 00:14:39.603 "product_name": "Raid Volume", 00:14:39.603 "block_size": 512, 00:14:39.603 "num_blocks": 126976, 00:14:39.603 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 
00:14:39.603 "assigned_rate_limits": { 00:14:39.603 "rw_ios_per_sec": 0, 00:14:39.603 "rw_mbytes_per_sec": 0, 00:14:39.603 "r_mbytes_per_sec": 0, 00:14:39.603 "w_mbytes_per_sec": 0 00:14:39.603 }, 00:14:39.603 "claimed": false, 00:14:39.603 "zoned": false, 00:14:39.603 "supported_io_types": { 00:14:39.603 "read": true, 00:14:39.603 "write": true, 00:14:39.603 "unmap": false, 00:14:39.603 "flush": false, 00:14:39.603 "reset": true, 00:14:39.603 "nvme_admin": false, 00:14:39.603 "nvme_io": false, 00:14:39.603 "nvme_io_md": false, 00:14:39.603 "write_zeroes": true, 00:14:39.603 "zcopy": false, 00:14:39.603 "get_zone_info": false, 00:14:39.603 "zone_management": false, 00:14:39.603 "zone_append": false, 00:14:39.603 "compare": false, 00:14:39.603 "compare_and_write": false, 00:14:39.603 "abort": false, 00:14:39.603 "seek_hole": false, 00:14:39.603 "seek_data": false, 00:14:39.603 "copy": false, 00:14:39.603 "nvme_iov_md": false 00:14:39.603 }, 00:14:39.603 "driver_specific": { 00:14:39.603 "raid": { 00:14:39.603 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:39.603 "strip_size_kb": 64, 00:14:39.603 "state": "online", 00:14:39.603 "raid_level": "raid5f", 00:14:39.603 "superblock": true, 00:14:39.603 "num_base_bdevs": 3, 00:14:39.603 "num_base_bdevs_discovered": 3, 00:14:39.603 "num_base_bdevs_operational": 3, 00:14:39.603 "base_bdevs_list": [ 00:14:39.603 { 00:14:39.603 "name": "pt1", 00:14:39.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.603 "is_configured": true, 00:14:39.603 "data_offset": 2048, 00:14:39.603 "data_size": 63488 00:14:39.603 }, 00:14:39.603 { 00:14:39.603 "name": "pt2", 00:14:39.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.603 "is_configured": true, 00:14:39.603 "data_offset": 2048, 00:14:39.603 "data_size": 63488 00:14:39.603 }, 00:14:39.603 { 00:14:39.603 "name": "pt3", 00:14:39.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.603 "is_configured": true, 00:14:39.603 "data_offset": 2048, 00:14:39.603 
"data_size": 63488 00:14:39.603 } 00:14:39.603 ] 00:14:39.603 } 00:14:39.603 } 00:14:39.603 }' 00:14:39.603 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:39.863 pt2 00:14:39.863 pt3' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.863 
02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.863 [2024-10-13 02:28:58.482673] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d8338c1d-de7b-47ed-87ba-1eef1f7ebf14 '!=' d8338c1d-de7b-47ed-87ba-1eef1f7ebf14 ']' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.863 [2024-10-13 02:28:58.526476] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.863 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.123 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.123 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.123 "name": "raid_bdev1", 00:14:40.123 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:40.123 "strip_size_kb": 64, 00:14:40.123 "state": "online", 00:14:40.123 "raid_level": "raid5f", 00:14:40.123 "superblock": true, 00:14:40.123 "num_base_bdevs": 3, 00:14:40.123 "num_base_bdevs_discovered": 2, 00:14:40.123 "num_base_bdevs_operational": 2, 00:14:40.123 "base_bdevs_list": [ 00:14:40.123 { 00:14:40.123 "name": null, 00:14:40.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.123 "is_configured": false, 00:14:40.123 "data_offset": 0, 00:14:40.123 "data_size": 63488 00:14:40.123 }, 00:14:40.123 { 00:14:40.123 "name": "pt2", 00:14:40.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.123 "is_configured": true, 00:14:40.123 "data_offset": 2048, 00:14:40.123 "data_size": 63488 00:14:40.123 }, 00:14:40.123 { 00:14:40.123 "name": "pt3", 00:14:40.123 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.123 "is_configured": true, 00:14:40.123 "data_offset": 2048, 00:14:40.123 "data_size": 63488 00:14:40.123 } 00:14:40.123 ] 00:14:40.123 }' 00:14:40.123 
02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.123 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.383 02:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.383 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.383 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.383 [2024-10-13 02:28:58.993630] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.383 [2024-10-13 02:28:58.993714] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.383 [2024-10-13 02:28:58.993806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.383 [2024-10-13 02:28:58.993881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.383 [2024-10-13 02:28:58.993891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:40.383 02:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- 
# '[' -n '' ']' 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.383 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.643 
02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.643 [2024-10-13 02:28:59.077460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.643 [2024-10-13 02:28:59.077574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.643 [2024-10-13 02:28:59.077611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:40.643 [2024-10-13 02:28:59.077640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.643 [2024-10-13 02:28:59.079744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.643 [2024-10-13 02:28:59.079817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.643 [2024-10-13 02:28:59.079919] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:40.643 [2024-10-13 02:28:59.079992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.643 pt2 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.643 "name": "raid_bdev1", 00:14:40.643 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:40.643 "strip_size_kb": 64, 00:14:40.643 "state": "configuring", 00:14:40.643 "raid_level": "raid5f", 00:14:40.643 "superblock": true, 00:14:40.643 "num_base_bdevs": 3, 00:14:40.643 "num_base_bdevs_discovered": 1, 00:14:40.643 "num_base_bdevs_operational": 2, 00:14:40.643 "base_bdevs_list": [ 00:14:40.643 { 00:14:40.643 "name": null, 00:14:40.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.643 "is_configured": false, 00:14:40.643 "data_offset": 2048, 00:14:40.643 "data_size": 63488 00:14:40.643 }, 00:14:40.643 { 00:14:40.643 "name": "pt2", 00:14:40.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.643 "is_configured": true, 00:14:40.643 "data_offset": 2048, 00:14:40.643 "data_size": 63488 00:14:40.643 }, 00:14:40.643 { 00:14:40.643 "name": null, 00:14:40.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.643 "is_configured": false, 00:14:40.643 "data_offset": 2048, 
00:14:40.643 "data_size": 63488 00:14:40.643 } 00:14:40.643 ] 00:14:40.643 }' 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.643 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.903 [2024-10-13 02:28:59.484747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.903 [2024-10-13 02:28:59.484796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.903 [2024-10-13 02:28:59.484813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:40.903 [2024-10-13 02:28:59.484821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.903 [2024-10-13 02:28:59.485162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.903 [2024-10-13 02:28:59.485189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.903 [2024-10-13 02:28:59.485247] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:40.903 [2024-10-13 02:28:59.485265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.903 [2024-10-13 02:28:59.485350] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:40.903 [2024-10-13 02:28:59.485362] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:40.903 [2024-10-13 02:28:59.485563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:40.903 [2024-10-13 02:28:59.485999] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:40.903 [2024-10-13 02:28:59.486013] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:14:40.903 [2024-10-13 02:28:59.486217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.903 pt3 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.903 
02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.903 "name": "raid_bdev1", 00:14:40.903 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:40.903 "strip_size_kb": 64, 00:14:40.903 "state": "online", 00:14:40.903 "raid_level": "raid5f", 00:14:40.903 "superblock": true, 00:14:40.903 "num_base_bdevs": 3, 00:14:40.903 "num_base_bdevs_discovered": 2, 00:14:40.903 "num_base_bdevs_operational": 2, 00:14:40.903 "base_bdevs_list": [ 00:14:40.903 { 00:14:40.903 "name": null, 00:14:40.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.903 "is_configured": false, 00:14:40.903 "data_offset": 2048, 00:14:40.903 "data_size": 63488 00:14:40.903 }, 00:14:40.903 { 00:14:40.903 "name": "pt2", 00:14:40.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.903 "is_configured": true, 00:14:40.903 "data_offset": 2048, 00:14:40.903 "data_size": 63488 00:14:40.903 }, 00:14:40.903 { 00:14:40.903 "name": "pt3", 00:14:40.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.903 "is_configured": true, 00:14:40.903 "data_offset": 2048, 00:14:40.903 "data_size": 63488 00:14:40.903 } 00:14:40.903 ] 00:14:40.903 }' 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.903 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.474 [2024-10-13 02:28:59.939977] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.474 [2024-10-13 02:28:59.940003] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.474 [2024-10-13 02:28:59.940063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.474 [2024-10-13 02:28:59.940116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.474 [2024-10-13 02:28:59.940126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:41.474 02:28:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.474 02:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.474 [2024-10-13 02:29:00.011859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.474 [2024-10-13 02:29:00.011983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.474 [2024-10-13 02:29:00.012006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:41.474 [2024-10-13 02:29:00.012019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.474 [2024-10-13 02:29:00.014427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.474 [2024-10-13 02:29:00.014468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.474 [2024-10-13 02:29:00.014544] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.474 [2024-10-13 02:29:00.014591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.474 [2024-10-13 02:29:00.014721] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:41.474 [2024-10-13 02:29:00.014741] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.474 [2024-10-13 02:29:00.014760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:41.474 [2024-10-13 02:29:00.014799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.474 pt1 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.474 02:29:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.474 "name": "raid_bdev1", 00:14:41.474 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:41.474 "strip_size_kb": 64, 00:14:41.474 "state": "configuring", 00:14:41.474 "raid_level": "raid5f", 00:14:41.474 "superblock": true, 00:14:41.474 "num_base_bdevs": 3, 00:14:41.474 "num_base_bdevs_discovered": 1, 00:14:41.474 "num_base_bdevs_operational": 2, 00:14:41.474 "base_bdevs_list": [ 00:14:41.474 { 00:14:41.474 "name": null, 00:14:41.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.474 "is_configured": false, 00:14:41.474 "data_offset": 2048, 00:14:41.474 "data_size": 63488 00:14:41.474 }, 00:14:41.474 { 00:14:41.474 "name": "pt2", 00:14:41.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.474 "is_configured": true, 00:14:41.474 "data_offset": 2048, 00:14:41.474 "data_size": 63488 00:14:41.474 }, 00:14:41.474 { 00:14:41.474 "name": null, 00:14:41.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.474 "is_configured": false, 00:14:41.474 "data_offset": 2048, 00:14:41.474 "data_size": 63488 00:14:41.474 } 00:14:41.474 ] 00:14:41.474 }' 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.474 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.044 [2024-10-13 02:29:00.507012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:42.044 [2024-10-13 02:29:00.507136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.044 [2024-10-13 02:29:00.507172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:42.044 [2024-10-13 02:29:00.507202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.044 [2024-10-13 02:29:00.507634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.044 [2024-10-13 02:29:00.507701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.044 [2024-10-13 02:29:00.507806] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:42.044 [2024-10-13 02:29:00.507879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.044 [2024-10-13 02:29:00.508008] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:42.044 [2024-10-13 02:29:00.508049] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:42.044 [2024-10-13 
02:29:00.508295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:42.044 [2024-10-13 02:29:00.508762] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:42.044 [2024-10-13 02:29:00.508808] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:42.044 [2024-10-13 02:29:00.509016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.044 pt3 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.044 "name": "raid_bdev1", 00:14:42.044 "uuid": "d8338c1d-de7b-47ed-87ba-1eef1f7ebf14", 00:14:42.044 "strip_size_kb": 64, 00:14:42.044 "state": "online", 00:14:42.044 "raid_level": "raid5f", 00:14:42.044 "superblock": true, 00:14:42.044 "num_base_bdevs": 3, 00:14:42.044 "num_base_bdevs_discovered": 2, 00:14:42.044 "num_base_bdevs_operational": 2, 00:14:42.044 "base_bdevs_list": [ 00:14:42.044 { 00:14:42.044 "name": null, 00:14:42.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.044 "is_configured": false, 00:14:42.044 "data_offset": 2048, 00:14:42.044 "data_size": 63488 00:14:42.044 }, 00:14:42.044 { 00:14:42.044 "name": "pt2", 00:14:42.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.044 "is_configured": true, 00:14:42.044 "data_offset": 2048, 00:14:42.044 "data_size": 63488 00:14:42.044 }, 00:14:42.044 { 00:14:42.044 "name": "pt3", 00:14:42.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.044 "is_configured": true, 00:14:42.044 "data_offset": 2048, 00:14:42.044 "data_size": 63488 00:14:42.044 } 00:14:42.044 ] 00:14:42.044 }' 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.044 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.304 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:42.304 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:42.304 02:29:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.304 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.304 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.304 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:42.304 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.304 02:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:42.304 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.304 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.304 [2024-10-13 02:29:00.974407] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.565 02:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d8338c1d-de7b-47ed-87ba-1eef1f7ebf14 '!=' d8338c1d-de7b-47ed-87ba-1eef1f7ebf14 ']' 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91597 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91597 ']' 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91597 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91597 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91597' 00:14:42.565 killing process with pid 91597 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91597 00:14:42.565 [2024-10-13 02:29:01.047442] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.565 [2024-10-13 02:29:01.047538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.565 [2024-10-13 02:29:01.047619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.565 [2024-10-13 02:29:01.047629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:42.565 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91597 00:14:42.565 [2024-10-13 02:29:01.082008] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.826 02:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:42.826 00:14:42.826 real 0m6.526s 00:14:42.826 user 0m10.898s 00:14:42.826 sys 0m1.419s 00:14:42.826 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.826 ************************************ 00:14:42.826 END TEST raid5f_superblock_test 00:14:42.826 ************************************ 00:14:42.826 02:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.826 02:29:01 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:42.826 02:29:01 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:42.826 02:29:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 
00:14:42.826 02:29:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.826 02:29:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.826 ************************************ 00:14:42.826 START TEST raid5f_rebuild_test 00:14:42.826 ************************************ 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:42.826 02:29:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92024 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92024 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92024 ']' 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.826 02:29:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.826 02:29:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.086 [2024-10-13 02:29:01.514002] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:43.086 [2024-10-13 02:29:01.514246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:43.086 Zero copy mechanism will not be used. 00:14:43.086 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92024 ] 00:14:43.086 [2024-10-13 02:29:01.659338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.086 [2024-10-13 02:29:01.704459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.086 [2024-10-13 02:29:01.747544] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.086 [2024-10-13 02:29:01.747699] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 BaseBdev1_malloc 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 [2024-10-13 02:29:02.365720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.026 [2024-10-13 02:29:02.365783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.026 [2024-10-13 02:29:02.365828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:44.026 [2024-10-13 02:29:02.365847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.026 [2024-10-13 02:29:02.367951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.026 [2024-10-13 02:29:02.368039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.026 BaseBdev1 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.026 BaseBdev2_malloc 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 [2024-10-13 02:29:02.411312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:44.026 [2024-10-13 02:29:02.411436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.026 [2024-10-13 02:29:02.411496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:44.026 [2024-10-13 02:29:02.411523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.026 [2024-10-13 02:29:02.416270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.026 [2024-10-13 02:29:02.416337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.026 BaseBdev2 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 BaseBdev3_malloc 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.026 02:29:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 [2024-10-13 02:29:02.442175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:44.026 [2024-10-13 02:29:02.442237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.026 [2024-10-13 02:29:02.442264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:44.026 [2024-10-13 02:29:02.442273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.026 [2024-10-13 02:29:02.444377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.026 [2024-10-13 02:29:02.444414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.026 BaseBdev3 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 spare_malloc 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.026 spare_delay 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.026 [2024-10-13 02:29:02.482699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.026 [2024-10-13 02:29:02.482757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.026 [2024-10-13 02:29:02.482782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:44.026 [2024-10-13 02:29:02.482790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.026 [2024-10-13 02:29:02.484848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.026 [2024-10-13 02:29:02.484898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.026 spare 00:14:44.026 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.027 [2024-10-13 02:29:02.494746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.027 [2024-10-13 02:29:02.496608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:14:44.027 [2024-10-13 02:29:02.496718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.027 [2024-10-13 02:29:02.496796] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:44.027 [2024-10-13 02:29:02.496807] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:44.027 [2024-10-13 02:29:02.497058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:44.027 [2024-10-13 02:29:02.497464] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:44.027 [2024-10-13 02:29:02.497475] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:44.027 [2024-10-13 02:29:02.497595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.027 "name": "raid_bdev1", 00:14:44.027 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:44.027 "strip_size_kb": 64, 00:14:44.027 "state": "online", 00:14:44.027 "raid_level": "raid5f", 00:14:44.027 "superblock": false, 00:14:44.027 "num_base_bdevs": 3, 00:14:44.027 "num_base_bdevs_discovered": 3, 00:14:44.027 "num_base_bdevs_operational": 3, 00:14:44.027 "base_bdevs_list": [ 00:14:44.027 { 00:14:44.027 "name": "BaseBdev1", 00:14:44.027 "uuid": "3ed85783-fd02-514a-a8f6-c737559c98f2", 00:14:44.027 "is_configured": true, 00:14:44.027 "data_offset": 0, 00:14:44.027 "data_size": 65536 00:14:44.027 }, 00:14:44.027 { 00:14:44.027 "name": "BaseBdev2", 00:14:44.027 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:44.027 "is_configured": true, 00:14:44.027 "data_offset": 0, 00:14:44.027 "data_size": 65536 00:14:44.027 }, 00:14:44.027 { 00:14:44.027 "name": "BaseBdev3", 00:14:44.027 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:44.027 "is_configured": true, 00:14:44.027 "data_offset": 0, 00:14:44.027 "data_size": 65536 00:14:44.027 } 00:14:44.027 ] 00:14:44.027 }' 00:14:44.027 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.027 02:29:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.287 [2024-10-13 02:29:02.902410] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.287 02:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.547 02:29:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:44.547 [2024-10-13 02:29:03.153887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:44.547 /dev/nbd0 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:44.547 
02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.547 1+0 records in 00:14:44.547 1+0 records out 00:14:44.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388784 s, 10.5 MB/s 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:44.547 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:45.116 512+0 records in 00:14:45.116 512+0 records out 00:14:45.116 67108864 bytes (67 MB, 64 MiB) copied, 0.296227 s, 227 MB/s 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:45.116 [2024-10-13 02:29:03.740601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 [2024-10-13 02:29:03.752599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.376 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.376 "name": "raid_bdev1", 00:14:45.376 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:45.376 "strip_size_kb": 64, 00:14:45.376 "state": "online", 00:14:45.376 "raid_level": "raid5f", 00:14:45.376 
"superblock": false, 00:14:45.376 "num_base_bdevs": 3, 00:14:45.376 "num_base_bdevs_discovered": 2, 00:14:45.376 "num_base_bdevs_operational": 2, 00:14:45.376 "base_bdevs_list": [ 00:14:45.376 { 00:14:45.376 "name": null, 00:14:45.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.376 "is_configured": false, 00:14:45.376 "data_offset": 0, 00:14:45.376 "data_size": 65536 00:14:45.376 }, 00:14:45.376 { 00:14:45.376 "name": "BaseBdev2", 00:14:45.376 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:45.376 "is_configured": true, 00:14:45.376 "data_offset": 0, 00:14:45.376 "data_size": 65536 00:14:45.376 }, 00:14:45.376 { 00:14:45.376 "name": "BaseBdev3", 00:14:45.376 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:45.376 "is_configured": true, 00:14:45.376 "data_offset": 0, 00:14:45.376 "data_size": 65536 00:14:45.376 } 00:14:45.376 ] 00:14:45.376 }' 00:14:45.376 02:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.376 02:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.636 02:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.636 02:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.636 02:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.637 [2024-10-13 02:29:04.223884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.637 [2024-10-13 02:29:04.227865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:14:45.637 02:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.637 02:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:45.637 [2024-10-13 02:29:04.230152] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.577 
02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.577 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.577 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.577 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.577 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.577 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.577 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.577 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.577 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.838 "name": "raid_bdev1", 00:14:46.838 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:46.838 "strip_size_kb": 64, 00:14:46.838 "state": "online", 00:14:46.838 "raid_level": "raid5f", 00:14:46.838 "superblock": false, 00:14:46.838 "num_base_bdevs": 3, 00:14:46.838 "num_base_bdevs_discovered": 3, 00:14:46.838 "num_base_bdevs_operational": 3, 00:14:46.838 "process": { 00:14:46.838 "type": "rebuild", 00:14:46.838 "target": "spare", 00:14:46.838 "progress": { 00:14:46.838 "blocks": 20480, 00:14:46.838 "percent": 15 00:14:46.838 } 00:14:46.838 }, 00:14:46.838 "base_bdevs_list": [ 00:14:46.838 { 00:14:46.838 "name": "spare", 00:14:46.838 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:46.838 "is_configured": true, 00:14:46.838 "data_offset": 0, 00:14:46.838 "data_size": 65536 
00:14:46.838 }, 00:14:46.838 { 00:14:46.838 "name": "BaseBdev2", 00:14:46.838 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:46.838 "is_configured": true, 00:14:46.838 "data_offset": 0, 00:14:46.838 "data_size": 65536 00:14:46.838 }, 00:14:46.838 { 00:14:46.838 "name": "BaseBdev3", 00:14:46.838 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:46.838 "is_configured": true, 00:14:46.838 "data_offset": 0, 00:14:46.838 "data_size": 65536 00:14:46.838 } 00:14:46.838 ] 00:14:46.838 }' 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.838 [2024-10-13 02:29:05.391735] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.838 [2024-10-13 02:29:05.437684] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.838 [2024-10-13 02:29:05.437808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.838 [2024-10-13 02:29:05.437844] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.838 [2024-10-13 02:29:05.437911] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.838 "name": "raid_bdev1", 00:14:46.838 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:46.838 "strip_size_kb": 64, 00:14:46.838 "state": "online", 00:14:46.838 "raid_level": "raid5f", 00:14:46.838 "superblock": false, 00:14:46.838 
"num_base_bdevs": 3, 00:14:46.838 "num_base_bdevs_discovered": 2, 00:14:46.838 "num_base_bdevs_operational": 2, 00:14:46.838 "base_bdevs_list": [ 00:14:46.838 { 00:14:46.838 "name": null, 00:14:46.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.838 "is_configured": false, 00:14:46.838 "data_offset": 0, 00:14:46.838 "data_size": 65536 00:14:46.838 }, 00:14:46.838 { 00:14:46.838 "name": "BaseBdev2", 00:14:46.838 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:46.838 "is_configured": true, 00:14:46.838 "data_offset": 0, 00:14:46.838 "data_size": 65536 00:14:46.838 }, 00:14:46.838 { 00:14:46.838 "name": "BaseBdev3", 00:14:46.838 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:46.838 "is_configured": true, 00:14:46.838 "data_offset": 0, 00:14:46.838 "data_size": 65536 00:14:46.838 } 00:14:46.838 ] 00:14:46.838 }' 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.838 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.408 "name": "raid_bdev1", 00:14:47.408 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:47.408 "strip_size_kb": 64, 00:14:47.408 "state": "online", 00:14:47.408 "raid_level": "raid5f", 00:14:47.408 "superblock": false, 00:14:47.408 "num_base_bdevs": 3, 00:14:47.408 "num_base_bdevs_discovered": 2, 00:14:47.408 "num_base_bdevs_operational": 2, 00:14:47.408 "base_bdevs_list": [ 00:14:47.408 { 00:14:47.408 "name": null, 00:14:47.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.408 "is_configured": false, 00:14:47.408 "data_offset": 0, 00:14:47.408 "data_size": 65536 00:14:47.408 }, 00:14:47.408 { 00:14:47.408 "name": "BaseBdev2", 00:14:47.408 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:47.408 "is_configured": true, 00:14:47.408 "data_offset": 0, 00:14:47.408 "data_size": 65536 00:14:47.408 }, 00:14:47.408 { 00:14:47.408 "name": "BaseBdev3", 00:14:47.408 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:47.408 "is_configured": true, 00:14:47.408 "data_offset": 0, 00:14:47.408 "data_size": 65536 00:14:47.408 } 00:14:47.408 ] 00:14:47.408 }' 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.408 [2024-10-13 02:29:05.958602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.408 [2024-10-13 02:29:05.962411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:14:47.408 [2024-10-13 02:29:05.964634] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.408 02:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.409 02:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.349 02:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.349 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.349 "name": "raid_bdev1", 00:14:48.349 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 
00:14:48.349 "strip_size_kb": 64, 00:14:48.349 "state": "online", 00:14:48.349 "raid_level": "raid5f", 00:14:48.349 "superblock": false, 00:14:48.349 "num_base_bdevs": 3, 00:14:48.349 "num_base_bdevs_discovered": 3, 00:14:48.349 "num_base_bdevs_operational": 3, 00:14:48.349 "process": { 00:14:48.349 "type": "rebuild", 00:14:48.349 "target": "spare", 00:14:48.349 "progress": { 00:14:48.349 "blocks": 20480, 00:14:48.349 "percent": 15 00:14:48.349 } 00:14:48.349 }, 00:14:48.349 "base_bdevs_list": [ 00:14:48.349 { 00:14:48.349 "name": "spare", 00:14:48.349 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:48.349 "is_configured": true, 00:14:48.349 "data_offset": 0, 00:14:48.349 "data_size": 65536 00:14:48.349 }, 00:14:48.349 { 00:14:48.349 "name": "BaseBdev2", 00:14:48.349 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:48.349 "is_configured": true, 00:14:48.349 "data_offset": 0, 00:14:48.349 "data_size": 65536 00:14:48.349 }, 00:14:48.349 { 00:14:48.349 "name": "BaseBdev3", 00:14:48.349 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:48.349 "is_configured": true, 00:14:48.349 "data_offset": 0, 00:14:48.349 "data_size": 65536 00:14:48.349 } 00:14:48.349 ] 00:14:48.349 }' 00:14:48.349 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:48.610 
02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=455 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.610 "name": "raid_bdev1", 00:14:48.610 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:48.610 "strip_size_kb": 64, 00:14:48.610 "state": "online", 00:14:48.610 "raid_level": "raid5f", 00:14:48.610 "superblock": false, 00:14:48.610 "num_base_bdevs": 3, 00:14:48.610 "num_base_bdevs_discovered": 3, 00:14:48.610 "num_base_bdevs_operational": 3, 00:14:48.610 "process": { 00:14:48.610 "type": "rebuild", 00:14:48.610 "target": "spare", 00:14:48.610 "progress": { 00:14:48.610 "blocks": 22528, 00:14:48.610 "percent": 17 00:14:48.610 } 00:14:48.610 }, 00:14:48.610 "base_bdevs_list": [ 
00:14:48.610 { 00:14:48.610 "name": "spare", 00:14:48.610 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:48.610 "is_configured": true, 00:14:48.610 "data_offset": 0, 00:14:48.610 "data_size": 65536 00:14:48.610 }, 00:14:48.610 { 00:14:48.610 "name": "BaseBdev2", 00:14:48.610 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:48.610 "is_configured": true, 00:14:48.610 "data_offset": 0, 00:14:48.610 "data_size": 65536 00:14:48.610 }, 00:14:48.610 { 00:14:48.610 "name": "BaseBdev3", 00:14:48.610 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:48.610 "is_configured": true, 00:14:48.610 "data_offset": 0, 00:14:48.610 "data_size": 65536 00:14:48.610 } 00:14:48.610 ] 00:14:48.610 }' 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.610 02:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.992 "name": "raid_bdev1", 00:14:49.992 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:49.992 "strip_size_kb": 64, 00:14:49.992 "state": "online", 00:14:49.992 "raid_level": "raid5f", 00:14:49.992 "superblock": false, 00:14:49.992 "num_base_bdevs": 3, 00:14:49.992 "num_base_bdevs_discovered": 3, 00:14:49.992 "num_base_bdevs_operational": 3, 00:14:49.992 "process": { 00:14:49.992 "type": "rebuild", 00:14:49.992 "target": "spare", 00:14:49.992 "progress": { 00:14:49.992 "blocks": 47104, 00:14:49.992 "percent": 35 00:14:49.992 } 00:14:49.992 }, 00:14:49.992 "base_bdevs_list": [ 00:14:49.992 { 00:14:49.992 "name": "spare", 00:14:49.992 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:49.992 "is_configured": true, 00:14:49.992 "data_offset": 0, 00:14:49.992 "data_size": 65536 00:14:49.992 }, 00:14:49.992 { 00:14:49.992 "name": "BaseBdev2", 00:14:49.992 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:49.992 "is_configured": true, 00:14:49.992 "data_offset": 0, 00:14:49.992 "data_size": 65536 00:14:49.992 }, 00:14:49.992 { 00:14:49.992 "name": "BaseBdev3", 00:14:49.992 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:49.992 "is_configured": true, 00:14:49.992 "data_offset": 0, 00:14:49.992 "data_size": 65536 00:14:49.992 } 00:14:49.992 ] 00:14:49.992 }' 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.992 02:29:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.992 02:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.933 "name": "raid_bdev1", 00:14:50.933 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:50.933 "strip_size_kb": 64, 00:14:50.933 "state": "online", 00:14:50.933 "raid_level": "raid5f", 00:14:50.933 "superblock": false, 00:14:50.933 "num_base_bdevs": 3, 00:14:50.933 
"num_base_bdevs_discovered": 3, 00:14:50.933 "num_base_bdevs_operational": 3, 00:14:50.933 "process": { 00:14:50.933 "type": "rebuild", 00:14:50.933 "target": "spare", 00:14:50.933 "progress": { 00:14:50.933 "blocks": 69632, 00:14:50.933 "percent": 53 00:14:50.933 } 00:14:50.933 }, 00:14:50.933 "base_bdevs_list": [ 00:14:50.933 { 00:14:50.933 "name": "spare", 00:14:50.933 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:50.933 "is_configured": true, 00:14:50.933 "data_offset": 0, 00:14:50.933 "data_size": 65536 00:14:50.933 }, 00:14:50.933 { 00:14:50.933 "name": "BaseBdev2", 00:14:50.933 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:50.933 "is_configured": true, 00:14:50.933 "data_offset": 0, 00:14:50.933 "data_size": 65536 00:14:50.933 }, 00:14:50.933 { 00:14:50.933 "name": "BaseBdev3", 00:14:50.933 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:50.933 "is_configured": true, 00:14:50.933 "data_offset": 0, 00:14:50.933 "data_size": 65536 00:14:50.933 } 00:14:50.933 ] 00:14:50.933 }' 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.933 02:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.315 "name": "raid_bdev1", 00:14:52.315 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:52.315 "strip_size_kb": 64, 00:14:52.315 "state": "online", 00:14:52.315 "raid_level": "raid5f", 00:14:52.315 "superblock": false, 00:14:52.315 "num_base_bdevs": 3, 00:14:52.315 "num_base_bdevs_discovered": 3, 00:14:52.315 "num_base_bdevs_operational": 3, 00:14:52.315 "process": { 00:14:52.315 "type": "rebuild", 00:14:52.315 "target": "spare", 00:14:52.315 "progress": { 00:14:52.315 "blocks": 92160, 00:14:52.315 "percent": 70 00:14:52.315 } 00:14:52.315 }, 00:14:52.315 "base_bdevs_list": [ 00:14:52.315 { 00:14:52.315 "name": "spare", 00:14:52.315 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:52.315 "is_configured": true, 00:14:52.315 "data_offset": 0, 00:14:52.315 "data_size": 65536 00:14:52.315 }, 00:14:52.315 { 00:14:52.315 "name": "BaseBdev2", 00:14:52.315 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:52.315 "is_configured": true, 00:14:52.315 "data_offset": 0, 00:14:52.315 "data_size": 65536 00:14:52.315 }, 00:14:52.315 { 00:14:52.315 "name": "BaseBdev3", 00:14:52.315 "uuid": 
"6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:52.315 "is_configured": true, 00:14:52.315 "data_offset": 0, 00:14:52.315 "data_size": 65536 00:14:52.315 } 00:14:52.315 ] 00:14:52.315 }' 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.315 02:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.254 "name": "raid_bdev1", 00:14:53.254 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:53.254 "strip_size_kb": 64, 00:14:53.254 "state": "online", 00:14:53.254 "raid_level": "raid5f", 00:14:53.254 "superblock": false, 00:14:53.254 "num_base_bdevs": 3, 00:14:53.254 "num_base_bdevs_discovered": 3, 00:14:53.254 "num_base_bdevs_operational": 3, 00:14:53.254 "process": { 00:14:53.254 "type": "rebuild", 00:14:53.254 "target": "spare", 00:14:53.254 "progress": { 00:14:53.254 "blocks": 116736, 00:14:53.254 "percent": 89 00:14:53.254 } 00:14:53.254 }, 00:14:53.254 "base_bdevs_list": [ 00:14:53.254 { 00:14:53.254 "name": "spare", 00:14:53.254 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:53.254 "is_configured": true, 00:14:53.254 "data_offset": 0, 00:14:53.254 "data_size": 65536 00:14:53.254 }, 00:14:53.254 { 00:14:53.254 "name": "BaseBdev2", 00:14:53.254 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:53.254 "is_configured": true, 00:14:53.254 "data_offset": 0, 00:14:53.254 "data_size": 65536 00:14:53.254 }, 00:14:53.254 { 00:14:53.254 "name": "BaseBdev3", 00:14:53.254 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:53.254 "is_configured": true, 00:14:53.254 "data_offset": 0, 00:14:53.254 "data_size": 65536 00:14:53.254 } 00:14:53.254 ] 00:14:53.254 }' 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.254 02:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.823 [2024-10-13 02:29:12.403235] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:14:53.823 [2024-10-13 02:29:12.403378] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:53.823 [2024-10-13 02:29:12.403446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.393 "name": "raid_bdev1", 00:14:54.393 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:54.393 "strip_size_kb": 64, 00:14:54.393 "state": "online", 00:14:54.393 "raid_level": "raid5f", 00:14:54.393 "superblock": false, 00:14:54.393 "num_base_bdevs": 3, 00:14:54.393 "num_base_bdevs_discovered": 3, 00:14:54.393 "num_base_bdevs_operational": 3, 00:14:54.393 "base_bdevs_list": [ 00:14:54.393 { 
00:14:54.393 "name": "spare", 00:14:54.393 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:54.393 "is_configured": true, 00:14:54.393 "data_offset": 0, 00:14:54.393 "data_size": 65536 00:14:54.393 }, 00:14:54.393 { 00:14:54.393 "name": "BaseBdev2", 00:14:54.393 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:54.393 "is_configured": true, 00:14:54.393 "data_offset": 0, 00:14:54.393 "data_size": 65536 00:14:54.393 }, 00:14:54.393 { 00:14:54.393 "name": "BaseBdev3", 00:14:54.393 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:54.393 "is_configured": true, 00:14:54.393 "data_offset": 0, 00:14:54.393 "data_size": 65536 00:14:54.393 } 00:14:54.393 ] 00:14:54.393 }' 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.393 02:29:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.393 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.393 "name": "raid_bdev1", 00:14:54.393 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:54.393 "strip_size_kb": 64, 00:14:54.393 "state": "online", 00:14:54.393 "raid_level": "raid5f", 00:14:54.393 "superblock": false, 00:14:54.393 "num_base_bdevs": 3, 00:14:54.393 "num_base_bdevs_discovered": 3, 00:14:54.393 "num_base_bdevs_operational": 3, 00:14:54.393 "base_bdevs_list": [ 00:14:54.393 { 00:14:54.393 "name": "spare", 00:14:54.393 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:54.393 "is_configured": true, 00:14:54.393 "data_offset": 0, 00:14:54.393 "data_size": 65536 00:14:54.393 }, 00:14:54.393 { 00:14:54.393 "name": "BaseBdev2", 00:14:54.393 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:54.393 "is_configured": true, 00:14:54.393 "data_offset": 0, 00:14:54.393 "data_size": 65536 00:14:54.393 }, 00:14:54.393 { 00:14:54.393 "name": "BaseBdev3", 00:14:54.393 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:54.393 "is_configured": true, 00:14:54.393 "data_offset": 0, 00:14:54.393 "data_size": 65536 00:14:54.393 } 00:14:54.393 ] 00:14:54.393 }' 00:14:54.393 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.393 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.393 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.653 02:29:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.653 "name": "raid_bdev1", 00:14:54.653 "uuid": "1cbe1488-ec5a-46a1-ad80-1fe0d6f39b3c", 00:14:54.653 "strip_size_kb": 64, 00:14:54.653 "state": "online", 00:14:54.653 "raid_level": "raid5f", 00:14:54.653 "superblock": false, 00:14:54.653 "num_base_bdevs": 3, 00:14:54.653 
"num_base_bdevs_discovered": 3, 00:14:54.653 "num_base_bdevs_operational": 3, 00:14:54.653 "base_bdevs_list": [ 00:14:54.653 { 00:14:54.653 "name": "spare", 00:14:54.653 "uuid": "0a3414aa-a40f-500b-b218-6777ce03b1d8", 00:14:54.653 "is_configured": true, 00:14:54.653 "data_offset": 0, 00:14:54.653 "data_size": 65536 00:14:54.653 }, 00:14:54.653 { 00:14:54.653 "name": "BaseBdev2", 00:14:54.653 "uuid": "f3f48cdc-f03a-5557-86a3-754eacb79422", 00:14:54.653 "is_configured": true, 00:14:54.653 "data_offset": 0, 00:14:54.653 "data_size": 65536 00:14:54.653 }, 00:14:54.653 { 00:14:54.653 "name": "BaseBdev3", 00:14:54.653 "uuid": "6fc6b1b7-dbe0-5cd6-9a8e-34c3e05d1a61", 00:14:54.653 "is_configured": true, 00:14:54.653 "data_offset": 0, 00:14:54.653 "data_size": 65536 00:14:54.653 } 00:14:54.653 ] 00:14:54.653 }' 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.653 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.225 [2024-10-13 02:29:13.602789] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.225 [2024-10-13 02:29:13.602825] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.225 [2024-10-13 02:29:13.602954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.225 [2024-10-13 02:29:13.603040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.225 [2024-10-13 02:29:13.603056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 
00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:55.225 /dev/nbd0 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.225 1+0 records in 00:14:55.225 1+0 records out 00:14:55.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469824 s, 8.7 MB/s 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:55.225 
02:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:55.225 02:29:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:55.498 /dev/nbd1 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.498 1+0 records in 00:14:55.498 1+0 records out 00:14:55.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339604 s, 12.1 MB/s 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.498 02:29:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:55.498 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.780 02:29:14 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.780 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92024 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92024 ']' 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92024 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92024 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.040 killing process with pid 92024 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92024' 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92024 00:14:56.040 Received shutdown signal, test time was about 60.000000 seconds 00:14:56.040 00:14:56.040 Latency(us) 00:14:56.040 [2024-10-13T02:29:14.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.040 [2024-10-13T02:29:14.724Z] =================================================================================================================== 00:14:56.040 [2024-10-13T02:29:14.724Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:56.040 [2024-10-13 02:29:14.703426] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.040 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92024 00:14:56.301 [2024-10-13 02:29:14.743421] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.301 02:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:56.301 00:14:56.301 real 0m13.554s 00:14:56.301 user 0m16.930s 00:14:56.301 sys 0m1.960s 00:14:56.301 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.301 02:29:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.301 ************************************ 00:14:56.301 END TEST raid5f_rebuild_test 00:14:56.301 
************************************ 00:14:56.560 02:29:15 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:56.560 02:29:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:56.560 02:29:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.560 02:29:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:56.560 ************************************ 00:14:56.561 START TEST raid5f_rebuild_test_sb 00:14:56.561 ************************************ 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.561 02:29:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92448 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92448 
00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92448 ']' 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:56.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.561 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.561 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:56.561 Zero copy mechanism will not be used. 00:14:56.561 [2024-10-13 02:29:15.134469] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:56.561 [2024-10-13 02:29:15.134588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92448 ] 00:14:56.821 [2024-10-13 02:29:15.278259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.821 [2024-10-13 02:29:15.323457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.821 [2024-10-13 02:29:15.366293] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.821 [2024-10-13 02:29:15.366337] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.390 BaseBdev1_malloc 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.390 [2024-10-13 02:29:15.972792] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:57.390 [2024-10-13 02:29:15.972865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.390 [2024-10-13 02:29:15.972914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:57.390 [2024-10-13 02:29:15.972934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.390 [2024-10-13 02:29:15.975040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.390 [2024-10-13 02:29:15.975075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:57.390 BaseBdev1 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.390 02:29:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.390 BaseBdev2_malloc 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.390 [2024-10-13 02:29:16.012813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:57.390 [2024-10-13 02:29:16.012895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:57.390 [2024-10-13 02:29:16.012926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:57.390 [2024-10-13 02:29:16.012940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.390 [2024-10-13 02:29:16.016182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.390 [2024-10-13 02:29:16.016232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:57.390 BaseBdev2 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.390 BaseBdev3_malloc 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.390 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.391 [2024-10-13 02:29:16.037487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:57.391 [2024-10-13 02:29:16.037547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.391 [2024-10-13 02:29:16.037572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:57.391 [2024-10-13 
02:29:16.037580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.391 [2024-10-13 02:29:16.039626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.391 [2024-10-13 02:29:16.039660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:57.391 BaseBdev3 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.391 spare_malloc 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.391 spare_delay 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.391 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.391 [2024-10-13 02:29:16.069828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:57.391 [2024-10-13 02:29:16.069909] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.391 [2024-10-13 02:29:16.069935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:57.391 [2024-10-13 02:29:16.069944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.391 [2024-10-13 02:29:16.071944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.391 [2024-10-13 02:29:16.071977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:57.651 spare 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.651 [2024-10-13 02:29:16.081902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.651 [2024-10-13 02:29:16.083653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.651 [2024-10-13 02:29:16.083712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.651 [2024-10-13 02:29:16.083854] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:57.651 [2024-10-13 02:29:16.083878] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.651 [2024-10-13 02:29:16.084131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:57.651 [2024-10-13 02:29:16.084551] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:57.651 [2024-10-13 02:29:16.084576] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:57.651 [2024-10-13 02:29:16.084691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.651 "name": "raid_bdev1", 00:14:57.651 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:14:57.651 "strip_size_kb": 64, 00:14:57.651 "state": "online", 00:14:57.651 "raid_level": "raid5f", 00:14:57.651 "superblock": true, 00:14:57.651 "num_base_bdevs": 3, 00:14:57.651 "num_base_bdevs_discovered": 3, 00:14:57.651 "num_base_bdevs_operational": 3, 00:14:57.651 "base_bdevs_list": [ 00:14:57.651 { 00:14:57.651 "name": "BaseBdev1", 00:14:57.651 "uuid": "73297bef-34eb-56c9-a9c5-26d2df6ca920", 00:14:57.651 "is_configured": true, 00:14:57.651 "data_offset": 2048, 00:14:57.651 "data_size": 63488 00:14:57.651 }, 00:14:57.651 { 00:14:57.651 "name": "BaseBdev2", 00:14:57.651 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:14:57.651 "is_configured": true, 00:14:57.651 "data_offset": 2048, 00:14:57.651 "data_size": 63488 00:14:57.651 }, 00:14:57.651 { 00:14:57.651 "name": "BaseBdev3", 00:14:57.651 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:14:57.651 "is_configured": true, 00:14:57.651 "data_offset": 2048, 00:14:57.651 "data_size": 63488 00:14:57.651 } 00:14:57.651 ] 00:14:57.651 }' 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.651 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:57.911 [2024-10-13 02:29:16.485580] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.911 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:14:57.912 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.912 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.912 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:58.172 [2024-10-13 02:29:16.765004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:58.172 /dev/nbd0 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.172 1+0 records in 00:14:58.172 1+0 records out 00:14:58.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414473 s, 9.9 MB/s 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:58.172 02:29:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:58.742 496+0 records in 00:14:58.742 496+0 records out 00:14:58.742 65011712 bytes (65 MB, 62 MiB) copied, 0.282254 s, 230 MB/s 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.742 [2024-10-13 02:29:17.322848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.742 [2024-10-13 02:29:17.338911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.742 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.742 "name": "raid_bdev1", 00:14:58.742 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:14:58.742 "strip_size_kb": 64, 00:14:58.742 "state": "online", 00:14:58.742 "raid_level": "raid5f", 00:14:58.742 "superblock": true, 00:14:58.742 "num_base_bdevs": 3, 00:14:58.742 "num_base_bdevs_discovered": 2, 00:14:58.742 "num_base_bdevs_operational": 2, 00:14:58.742 "base_bdevs_list": [ 00:14:58.743 { 00:14:58.743 "name": null, 00:14:58.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.743 "is_configured": 
false, 00:14:58.743 "data_offset": 0, 00:14:58.743 "data_size": 63488 00:14:58.743 }, 00:14:58.743 { 00:14:58.743 "name": "BaseBdev2", 00:14:58.743 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:14:58.743 "is_configured": true, 00:14:58.743 "data_offset": 2048, 00:14:58.743 "data_size": 63488 00:14:58.743 }, 00:14:58.743 { 00:14:58.743 "name": "BaseBdev3", 00:14:58.743 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:14:58.743 "is_configured": true, 00:14:58.743 "data_offset": 2048, 00:14:58.743 "data_size": 63488 00:14:58.743 } 00:14:58.743 ] 00:14:58.743 }' 00:14:58.743 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.743 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.312 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.312 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.312 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.312 [2024-10-13 02:29:17.786174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.312 [2024-10-13 02:29:17.790086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:14:59.312 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.312 02:29:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:59.312 [2024-10-13 02:29:17.792326] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.250 02:29:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.250 "name": "raid_bdev1", 00:15:00.250 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:00.250 "strip_size_kb": 64, 00:15:00.250 "state": "online", 00:15:00.250 "raid_level": "raid5f", 00:15:00.250 "superblock": true, 00:15:00.250 "num_base_bdevs": 3, 00:15:00.250 "num_base_bdevs_discovered": 3, 00:15:00.250 "num_base_bdevs_operational": 3, 00:15:00.250 "process": { 00:15:00.250 "type": "rebuild", 00:15:00.250 "target": "spare", 00:15:00.250 "progress": { 00:15:00.250 "blocks": 20480, 00:15:00.250 "percent": 16 00:15:00.250 } 00:15:00.250 }, 00:15:00.250 "base_bdevs_list": [ 00:15:00.250 { 00:15:00.250 "name": "spare", 00:15:00.250 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:00.250 "is_configured": true, 00:15:00.250 "data_offset": 2048, 00:15:00.250 "data_size": 63488 00:15:00.250 }, 00:15:00.250 { 00:15:00.250 "name": "BaseBdev2", 00:15:00.250 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:00.250 "is_configured": true, 00:15:00.250 "data_offset": 2048, 00:15:00.250 "data_size": 63488 
00:15:00.250 }, 00:15:00.250 { 00:15:00.250 "name": "BaseBdev3", 00:15:00.250 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:00.250 "is_configured": true, 00:15:00.250 "data_offset": 2048, 00:15:00.250 "data_size": 63488 00:15:00.250 } 00:15:00.250 ] 00:15:00.250 }' 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.250 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.510 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.510 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.510 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.510 02:29:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.510 [2024-10-13 02:29:18.964630] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.510 [2024-10-13 02:29:18.999660] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:00.510 [2024-10-13 02:29:18.999806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.510 [2024-10-13 02:29:18.999824] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.510 [2024-10-13 02:29:18.999842] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.510 "name": "raid_bdev1", 00:15:00.510 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:00.510 "strip_size_kb": 64, 00:15:00.510 "state": "online", 00:15:00.510 "raid_level": "raid5f", 00:15:00.510 "superblock": true, 00:15:00.510 "num_base_bdevs": 3, 00:15:00.510 "num_base_bdevs_discovered": 2, 00:15:00.510 "num_base_bdevs_operational": 2, 00:15:00.510 "base_bdevs_list": [ 00:15:00.510 
{ 00:15:00.510 "name": null, 00:15:00.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.510 "is_configured": false, 00:15:00.510 "data_offset": 0, 00:15:00.510 "data_size": 63488 00:15:00.510 }, 00:15:00.510 { 00:15:00.510 "name": "BaseBdev2", 00:15:00.510 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:00.510 "is_configured": true, 00:15:00.510 "data_offset": 2048, 00:15:00.510 "data_size": 63488 00:15:00.510 }, 00:15:00.510 { 00:15:00.510 "name": "BaseBdev3", 00:15:00.510 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:00.510 "is_configured": true, 00:15:00.510 "data_offset": 2048, 00:15:00.510 "data_size": 63488 00:15:00.510 } 00:15:00.510 ] 00:15:00.510 }' 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.510 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.081 "name": "raid_bdev1", 00:15:01.081 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:01.081 "strip_size_kb": 64, 00:15:01.081 "state": "online", 00:15:01.081 "raid_level": "raid5f", 00:15:01.081 "superblock": true, 00:15:01.081 "num_base_bdevs": 3, 00:15:01.081 "num_base_bdevs_discovered": 2, 00:15:01.081 "num_base_bdevs_operational": 2, 00:15:01.081 "base_bdevs_list": [ 00:15:01.081 { 00:15:01.081 "name": null, 00:15:01.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.081 "is_configured": false, 00:15:01.081 "data_offset": 0, 00:15:01.081 "data_size": 63488 00:15:01.081 }, 00:15:01.081 { 00:15:01.081 "name": "BaseBdev2", 00:15:01.081 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:01.081 "is_configured": true, 00:15:01.081 "data_offset": 2048, 00:15:01.081 "data_size": 63488 00:15:01.081 }, 00:15:01.081 { 00:15:01.081 "name": "BaseBdev3", 00:15:01.081 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:01.081 "is_configured": true, 00:15:01.081 "data_offset": 2048, 00:15:01.081 "data_size": 63488 00:15:01.081 } 00:15:01.081 ] 00:15:01.081 }' 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:01.081 [2024-10-13 02:29:19.640367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.081 [2024-10-13 02:29:19.644097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:15:01.081 [2024-10-13 02:29:19.646247] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.081 02:29:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.022 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.022 "name": "raid_bdev1", 00:15:02.022 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:02.022 "strip_size_kb": 64, 00:15:02.022 "state": "online", 
00:15:02.022 "raid_level": "raid5f", 00:15:02.022 "superblock": true, 00:15:02.022 "num_base_bdevs": 3, 00:15:02.022 "num_base_bdevs_discovered": 3, 00:15:02.022 "num_base_bdevs_operational": 3, 00:15:02.022 "process": { 00:15:02.022 "type": "rebuild", 00:15:02.022 "target": "spare", 00:15:02.022 "progress": { 00:15:02.022 "blocks": 20480, 00:15:02.022 "percent": 16 00:15:02.022 } 00:15:02.022 }, 00:15:02.022 "base_bdevs_list": [ 00:15:02.022 { 00:15:02.022 "name": "spare", 00:15:02.022 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:02.022 "is_configured": true, 00:15:02.022 "data_offset": 2048, 00:15:02.022 "data_size": 63488 00:15:02.022 }, 00:15:02.022 { 00:15:02.022 "name": "BaseBdev2", 00:15:02.022 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:02.022 "is_configured": true, 00:15:02.022 "data_offset": 2048, 00:15:02.022 "data_size": 63488 00:15:02.022 }, 00:15:02.022 { 00:15:02.022 "name": "BaseBdev3", 00:15:02.022 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:02.022 "is_configured": true, 00:15:02.022 "data_offset": 2048, 00:15:02.022 "data_size": 63488 00:15:02.022 } 00:15:02.022 ] 00:15:02.022 }' 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:02.283 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.283 "name": "raid_bdev1", 00:15:02.283 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:02.283 "strip_size_kb": 64, 00:15:02.283 "state": "online", 00:15:02.283 "raid_level": "raid5f", 00:15:02.283 "superblock": true, 00:15:02.283 "num_base_bdevs": 3, 00:15:02.283 "num_base_bdevs_discovered": 3, 00:15:02.283 "num_base_bdevs_operational": 3, 00:15:02.283 "process": { 00:15:02.283 "type": 
"rebuild", 00:15:02.283 "target": "spare", 00:15:02.283 "progress": { 00:15:02.283 "blocks": 22528, 00:15:02.283 "percent": 17 00:15:02.283 } 00:15:02.283 }, 00:15:02.283 "base_bdevs_list": [ 00:15:02.283 { 00:15:02.283 "name": "spare", 00:15:02.283 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:02.283 "is_configured": true, 00:15:02.283 "data_offset": 2048, 00:15:02.283 "data_size": 63488 00:15:02.283 }, 00:15:02.283 { 00:15:02.283 "name": "BaseBdev2", 00:15:02.283 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:02.283 "is_configured": true, 00:15:02.283 "data_offset": 2048, 00:15:02.283 "data_size": 63488 00:15:02.283 }, 00:15:02.283 { 00:15:02.283 "name": "BaseBdev3", 00:15:02.283 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:02.283 "is_configured": true, 00:15:02.283 "data_offset": 2048, 00:15:02.283 "data_size": 63488 00:15:02.283 } 00:15:02.283 ] 00:15:02.283 }' 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.283 02:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.665 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.665 "name": "raid_bdev1", 00:15:03.665 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:03.665 "strip_size_kb": 64, 00:15:03.666 "state": "online", 00:15:03.666 "raid_level": "raid5f", 00:15:03.666 "superblock": true, 00:15:03.666 "num_base_bdevs": 3, 00:15:03.666 "num_base_bdevs_discovered": 3, 00:15:03.666 "num_base_bdevs_operational": 3, 00:15:03.666 "process": { 00:15:03.666 "type": "rebuild", 00:15:03.666 "target": "spare", 00:15:03.666 "progress": { 00:15:03.666 "blocks": 45056, 00:15:03.666 "percent": 35 00:15:03.666 } 00:15:03.666 }, 00:15:03.666 "base_bdevs_list": [ 00:15:03.666 { 00:15:03.666 "name": "spare", 00:15:03.666 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:03.666 "is_configured": true, 00:15:03.666 "data_offset": 2048, 00:15:03.666 "data_size": 63488 00:15:03.666 }, 00:15:03.666 { 00:15:03.666 "name": "BaseBdev2", 00:15:03.666 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:03.666 "is_configured": true, 00:15:03.666 "data_offset": 2048, 00:15:03.666 "data_size": 63488 00:15:03.666 }, 00:15:03.666 { 00:15:03.666 "name": "BaseBdev3", 00:15:03.666 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:03.666 
"is_configured": true, 00:15:03.666 "data_offset": 2048, 00:15:03.666 "data_size": 63488 00:15:03.666 } 00:15:03.666 ] 00:15:03.666 }' 00:15:03.666 02:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.666 02:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.666 02:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.666 02:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.666 02:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.606 "name": "raid_bdev1", 00:15:04.606 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:04.606 "strip_size_kb": 64, 00:15:04.606 "state": "online", 00:15:04.606 "raid_level": "raid5f", 00:15:04.606 "superblock": true, 00:15:04.606 "num_base_bdevs": 3, 00:15:04.606 "num_base_bdevs_discovered": 3, 00:15:04.606 "num_base_bdevs_operational": 3, 00:15:04.606 "process": { 00:15:04.606 "type": "rebuild", 00:15:04.606 "target": "spare", 00:15:04.606 "progress": { 00:15:04.606 "blocks": 69632, 00:15:04.606 "percent": 54 00:15:04.606 } 00:15:04.606 }, 00:15:04.606 "base_bdevs_list": [ 00:15:04.606 { 00:15:04.606 "name": "spare", 00:15:04.606 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:04.606 "is_configured": true, 00:15:04.606 "data_offset": 2048, 00:15:04.606 "data_size": 63488 00:15:04.606 }, 00:15:04.606 { 00:15:04.606 "name": "BaseBdev2", 00:15:04.606 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:04.606 "is_configured": true, 00:15:04.606 "data_offset": 2048, 00:15:04.606 "data_size": 63488 00:15:04.606 }, 00:15:04.606 { 00:15:04.606 "name": "BaseBdev3", 00:15:04.606 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:04.606 "is_configured": true, 00:15:04.606 "data_offset": 2048, 00:15:04.606 "data_size": 63488 00:15:04.606 } 00:15:04.606 ] 00:15:04.606 }' 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.606 02:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.989 "name": "raid_bdev1", 00:15:05.989 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:05.989 "strip_size_kb": 64, 00:15:05.989 "state": "online", 00:15:05.989 "raid_level": "raid5f", 00:15:05.989 "superblock": true, 00:15:05.989 "num_base_bdevs": 3, 00:15:05.989 "num_base_bdevs_discovered": 3, 00:15:05.989 "num_base_bdevs_operational": 3, 00:15:05.989 "process": { 00:15:05.989 "type": "rebuild", 00:15:05.989 "target": "spare", 00:15:05.989 "progress": { 00:15:05.989 "blocks": 92160, 00:15:05.989 "percent": 72 00:15:05.989 } 00:15:05.989 }, 00:15:05.989 "base_bdevs_list": [ 00:15:05.989 { 00:15:05.989 "name": "spare", 00:15:05.989 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:05.989 "is_configured": true, 
00:15:05.989 "data_offset": 2048, 00:15:05.989 "data_size": 63488 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "name": "BaseBdev2", 00:15:05.989 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:05.989 "is_configured": true, 00:15:05.989 "data_offset": 2048, 00:15:05.989 "data_size": 63488 00:15:05.989 }, 00:15:05.989 { 00:15:05.989 "name": "BaseBdev3", 00:15:05.989 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:05.989 "is_configured": true, 00:15:05.989 "data_offset": 2048, 00:15:05.989 "data_size": 63488 00:15:05.989 } 00:15:05.989 ] 00:15:05.989 }' 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.989 02:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.929 "name": "raid_bdev1", 00:15:06.929 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:06.929 "strip_size_kb": 64, 00:15:06.929 "state": "online", 00:15:06.929 "raid_level": "raid5f", 00:15:06.929 "superblock": true, 00:15:06.929 "num_base_bdevs": 3, 00:15:06.929 "num_base_bdevs_discovered": 3, 00:15:06.929 "num_base_bdevs_operational": 3, 00:15:06.929 "process": { 00:15:06.929 "type": "rebuild", 00:15:06.929 "target": "spare", 00:15:06.929 "progress": { 00:15:06.929 "blocks": 116736, 00:15:06.929 "percent": 91 00:15:06.929 } 00:15:06.929 }, 00:15:06.929 "base_bdevs_list": [ 00:15:06.929 { 00:15:06.929 "name": "spare", 00:15:06.929 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:06.929 "is_configured": true, 00:15:06.929 "data_offset": 2048, 00:15:06.929 "data_size": 63488 00:15:06.929 }, 00:15:06.929 { 00:15:06.929 "name": "BaseBdev2", 00:15:06.929 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:06.929 "is_configured": true, 00:15:06.929 "data_offset": 2048, 00:15:06.929 "data_size": 63488 00:15:06.929 }, 00:15:06.929 { 00:15:06.929 "name": "BaseBdev3", 00:15:06.929 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:06.929 "is_configured": true, 00:15:06.929 "data_offset": 2048, 00:15:06.929 "data_size": 63488 00:15:06.929 } 00:15:06.929 ] 00:15:06.929 }' 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.929 02:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.499 [2024-10-13 02:29:25.883226] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:07.499 [2024-10-13 02:29:25.883349] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:07.499 [2024-10-13 02:29:25.883481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.070 02:29:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.070 "name": "raid_bdev1", 00:15:08.070 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:08.070 "strip_size_kb": 64, 00:15:08.070 "state": "online", 00:15:08.070 "raid_level": "raid5f", 00:15:08.070 "superblock": true, 00:15:08.070 "num_base_bdevs": 3, 00:15:08.070 "num_base_bdevs_discovered": 3, 00:15:08.070 "num_base_bdevs_operational": 3, 00:15:08.070 "base_bdevs_list": [ 00:15:08.070 { 00:15:08.070 "name": "spare", 00:15:08.070 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:08.070 "is_configured": true, 00:15:08.070 "data_offset": 2048, 00:15:08.070 "data_size": 63488 00:15:08.070 }, 00:15:08.070 { 00:15:08.070 "name": "BaseBdev2", 00:15:08.070 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:08.070 "is_configured": true, 00:15:08.070 "data_offset": 2048, 00:15:08.070 "data_size": 63488 00:15:08.070 }, 00:15:08.070 { 00:15:08.070 "name": "BaseBdev3", 00:15:08.070 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:08.070 "is_configured": true, 00:15:08.070 "data_offset": 2048, 00:15:08.070 "data_size": 63488 00:15:08.070 } 00:15:08.070 ] 00:15:08.070 }' 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.070 
02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.070 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.330 "name": "raid_bdev1", 00:15:08.330 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:08.330 "strip_size_kb": 64, 00:15:08.330 "state": "online", 00:15:08.330 "raid_level": "raid5f", 00:15:08.330 "superblock": true, 00:15:08.330 "num_base_bdevs": 3, 00:15:08.330 "num_base_bdevs_discovered": 3, 00:15:08.330 "num_base_bdevs_operational": 3, 00:15:08.330 "base_bdevs_list": [ 00:15:08.330 { 00:15:08.330 "name": "spare", 00:15:08.330 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:08.330 "is_configured": true, 00:15:08.330 "data_offset": 2048, 00:15:08.330 "data_size": 63488 00:15:08.330 }, 00:15:08.330 { 00:15:08.330 "name": "BaseBdev2", 00:15:08.330 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:08.330 "is_configured": true, 00:15:08.330 "data_offset": 2048, 00:15:08.330 "data_size": 63488 00:15:08.330 }, 00:15:08.330 { 00:15:08.330 "name": "BaseBdev3", 00:15:08.330 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:08.330 "is_configured": true, 00:15:08.330 "data_offset": 2048, 
00:15:08.330 "data_size": 63488 00:15:08.330 } 00:15:08.330 ] 00:15:08.330 }' 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.330 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.330 "name": "raid_bdev1", 00:15:08.330 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:08.331 "strip_size_kb": 64, 00:15:08.331 "state": "online", 00:15:08.331 "raid_level": "raid5f", 00:15:08.331 "superblock": true, 00:15:08.331 "num_base_bdevs": 3, 00:15:08.331 "num_base_bdevs_discovered": 3, 00:15:08.331 "num_base_bdevs_operational": 3, 00:15:08.331 "base_bdevs_list": [ 00:15:08.331 { 00:15:08.331 "name": "spare", 00:15:08.331 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:08.331 "is_configured": true, 00:15:08.331 "data_offset": 2048, 00:15:08.331 "data_size": 63488 00:15:08.331 }, 00:15:08.331 { 00:15:08.331 "name": "BaseBdev2", 00:15:08.331 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:08.331 "is_configured": true, 00:15:08.331 "data_offset": 2048, 00:15:08.331 "data_size": 63488 00:15:08.331 }, 00:15:08.331 { 00:15:08.331 "name": "BaseBdev3", 00:15:08.331 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:08.331 "is_configured": true, 00:15:08.331 "data_offset": 2048, 00:15:08.331 "data_size": 63488 00:15:08.331 } 00:15:08.331 ] 00:15:08.331 }' 00:15:08.331 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.331 02:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.901 [2024-10-13 02:29:27.310289] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.901 [2024-10-13 02:29:27.310327] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.901 [2024-10-13 02:29:27.310410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.901 [2024-10-13 02:29:27.310494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.901 [2024-10-13 02:29:27.310503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:08.901 02:29:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:08.901 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:08.901 /dev/nbd0 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.161 1+0 records in 00:15:09.161 1+0 records out 00:15:09.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536271 s, 7.6 MB/s 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:09.161 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:09.161 /dev/nbd1 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:09.421 
02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.421 1+0 records in 00:15:09.421 1+0 records out 00:15:09.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417399 s, 9.8 MB/s 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.421 02:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.680 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.940 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.940 [2024-10-13 02:29:28.428867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:09.940 [2024-10-13 02:29:28.429024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.940 [2024-10-13 02:29:28.429067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:09.941 [2024-10-13 02:29:28.429095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.941 [2024-10-13 02:29:28.431278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.941 [2024-10-13 02:29:28.431365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:09.941 [2024-10-13 02:29:28.431474] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:09.941 [2024-10-13 02:29:28.431536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.941 [2024-10-13 02:29:28.431704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.941 [2024-10-13 02:29:28.431846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.941 spare 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.941 [2024-10-13 02:29:28.531812] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:09.941 [2024-10-13 02:29:28.531954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:09.941 [2024-10-13 02:29:28.532334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:15:09.941 [2024-10-13 02:29:28.532849] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:09.941 [2024-10-13 02:29:28.532910] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:09.941 [2024-10-13 02:29:28.533126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.941 "name": "raid_bdev1", 00:15:09.941 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:09.941 "strip_size_kb": 64, 00:15:09.941 "state": "online", 00:15:09.941 "raid_level": "raid5f", 00:15:09.941 "superblock": true, 00:15:09.941 "num_base_bdevs": 3, 00:15:09.941 "num_base_bdevs_discovered": 3, 00:15:09.941 "num_base_bdevs_operational": 3, 00:15:09.941 "base_bdevs_list": [ 00:15:09.941 { 
00:15:09.941 "name": "spare", 00:15:09.941 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:09.941 "is_configured": true, 00:15:09.941 "data_offset": 2048, 00:15:09.941 "data_size": 63488 00:15:09.941 }, 00:15:09.941 { 00:15:09.941 "name": "BaseBdev2", 00:15:09.941 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:09.941 "is_configured": true, 00:15:09.941 "data_offset": 2048, 00:15:09.941 "data_size": 63488 00:15:09.941 }, 00:15:09.941 { 00:15:09.941 "name": "BaseBdev3", 00:15:09.941 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:09.941 "is_configured": true, 00:15:09.941 "data_offset": 2048, 00:15:09.941 "data_size": 63488 00:15:09.941 } 00:15:09.941 ] 00:15:09.941 }' 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.941 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.509 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.509 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.509 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.509 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.509 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.509 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.509 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.509 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.509 02:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.509 "name": "raid_bdev1", 00:15:10.509 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:10.509 "strip_size_kb": 64, 00:15:10.509 "state": "online", 00:15:10.509 "raid_level": "raid5f", 00:15:10.509 "superblock": true, 00:15:10.509 "num_base_bdevs": 3, 00:15:10.509 "num_base_bdevs_discovered": 3, 00:15:10.509 "num_base_bdevs_operational": 3, 00:15:10.509 "base_bdevs_list": [ 00:15:10.509 { 00:15:10.509 "name": "spare", 00:15:10.509 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:10.509 "is_configured": true, 00:15:10.509 "data_offset": 2048, 00:15:10.509 "data_size": 63488 00:15:10.509 }, 00:15:10.509 { 00:15:10.509 "name": "BaseBdev2", 00:15:10.509 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:10.509 "is_configured": true, 00:15:10.509 "data_offset": 2048, 00:15:10.509 "data_size": 63488 00:15:10.509 }, 00:15:10.509 { 00:15:10.509 "name": "BaseBdev3", 00:15:10.509 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:10.509 "is_configured": true, 00:15:10.509 "data_offset": 2048, 00:15:10.509 "data_size": 63488 00:15:10.509 } 00:15:10.509 ] 00:15:10.509 }' 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.509 [2024-10-13 02:29:29.172053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.509 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.510 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.769 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.769 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.769 "name": "raid_bdev1", 00:15:10.769 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:10.769 "strip_size_kb": 64, 00:15:10.769 "state": "online", 00:15:10.769 "raid_level": "raid5f", 00:15:10.769 "superblock": true, 00:15:10.769 "num_base_bdevs": 3, 00:15:10.769 "num_base_bdevs_discovered": 2, 00:15:10.769 "num_base_bdevs_operational": 2, 00:15:10.769 "base_bdevs_list": [ 00:15:10.769 { 00:15:10.769 "name": null, 00:15:10.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.769 "is_configured": false, 00:15:10.769 "data_offset": 0, 00:15:10.769 "data_size": 63488 00:15:10.769 }, 00:15:10.769 { 00:15:10.769 "name": "BaseBdev2", 00:15:10.769 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:10.769 "is_configured": true, 00:15:10.769 "data_offset": 2048, 00:15:10.769 "data_size": 63488 00:15:10.769 }, 00:15:10.769 { 00:15:10.769 "name": "BaseBdev3", 00:15:10.769 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:10.769 "is_configured": true, 00:15:10.769 "data_offset": 2048, 00:15:10.769 "data_size": 63488 00:15:10.769 } 00:15:10.769 ] 00:15:10.769 }' 00:15:10.769 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.769 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:11.028 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.028 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.028 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.028 [2024-10-13 02:29:29.583735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.028 [2024-10-13 02:29:29.584045] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:11.028 [2024-10-13 02:29:29.584105] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:11.028 [2024-10-13 02:29:29.584210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.028 [2024-10-13 02:29:29.587906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:15:11.028 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.028 02:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:11.028 [2024-10-13 02:29:29.590206] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.135 "name": "raid_bdev1", 00:15:12.135 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:12.135 "strip_size_kb": 64, 00:15:12.135 "state": "online", 00:15:12.135 "raid_level": "raid5f", 00:15:12.135 "superblock": true, 00:15:12.135 "num_base_bdevs": 3, 00:15:12.135 "num_base_bdevs_discovered": 3, 00:15:12.135 "num_base_bdevs_operational": 3, 00:15:12.135 "process": { 00:15:12.135 "type": "rebuild", 00:15:12.135 "target": "spare", 00:15:12.135 "progress": { 00:15:12.135 "blocks": 20480, 00:15:12.135 "percent": 16 00:15:12.135 } 00:15:12.135 }, 00:15:12.135 "base_bdevs_list": [ 00:15:12.135 { 00:15:12.135 "name": "spare", 00:15:12.135 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:12.135 "is_configured": true, 00:15:12.135 "data_offset": 2048, 00:15:12.135 "data_size": 63488 00:15:12.135 }, 00:15:12.135 { 00:15:12.135 "name": "BaseBdev2", 00:15:12.135 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:12.135 "is_configured": true, 00:15:12.135 "data_offset": 2048, 00:15:12.135 "data_size": 63488 00:15:12.135 }, 00:15:12.135 { 00:15:12.135 "name": "BaseBdev3", 00:15:12.135 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:12.135 "is_configured": true, 00:15:12.135 "data_offset": 2048, 00:15:12.135 "data_size": 63488 00:15:12.135 } 00:15:12.135 ] 00:15:12.135 }' 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.135 [2024-10-13 02:29:30.747376] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.135 [2024-10-13 02:29:30.798675] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.135 [2024-10-13 02:29:30.798785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.135 [2024-10-13 02:29:30.798841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.135 [2024-10-13 02:29:30.798864] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.135 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.399 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.399 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.399 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.399 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.399 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.399 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.399 "name": "raid_bdev1", 00:15:12.399 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:12.399 "strip_size_kb": 64, 00:15:12.399 "state": "online", 00:15:12.399 "raid_level": "raid5f", 00:15:12.399 "superblock": true, 00:15:12.399 "num_base_bdevs": 3, 00:15:12.399 "num_base_bdevs_discovered": 2, 00:15:12.399 "num_base_bdevs_operational": 2, 00:15:12.399 "base_bdevs_list": [ 00:15:12.399 { 00:15:12.399 "name": null, 00:15:12.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.399 "is_configured": false, 00:15:12.399 "data_offset": 0, 00:15:12.399 "data_size": 63488 00:15:12.399 }, 00:15:12.399 { 00:15:12.399 "name": "BaseBdev2", 00:15:12.399 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:12.399 "is_configured": true, 00:15:12.399 
"data_offset": 2048, 00:15:12.399 "data_size": 63488 00:15:12.399 }, 00:15:12.399 { 00:15:12.399 "name": "BaseBdev3", 00:15:12.399 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:12.399 "is_configured": true, 00:15:12.399 "data_offset": 2048, 00:15:12.399 "data_size": 63488 00:15:12.399 } 00:15:12.399 ] 00:15:12.399 }' 00:15:12.399 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.399 02:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.665 02:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.665 02:29:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.665 02:29:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.665 [2024-10-13 02:29:31.287284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.665 [2024-10-13 02:29:31.287472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.665 [2024-10-13 02:29:31.287501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:12.665 [2024-10-13 02:29:31.287510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.665 [2024-10-13 02:29:31.288037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.665 [2024-10-13 02:29:31.288059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.665 [2024-10-13 02:29:31.288156] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:12.665 [2024-10-13 02:29:31.288169] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:12.665 [2024-10-13 02:29:31.288181] bdev_raid.c:3748:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:12.665 [2024-10-13 02:29:31.288203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.665 [2024-10-13 02:29:31.291903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:15:12.665 spare 00:15:12.665 02:29:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.665 02:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:12.665 [2024-10-13 02:29:31.294103] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.047 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.047 "name": "raid_bdev1", 00:15:14.047 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 
00:15:14.047 "strip_size_kb": 64, 00:15:14.047 "state": "online", 00:15:14.047 "raid_level": "raid5f", 00:15:14.047 "superblock": true, 00:15:14.047 "num_base_bdevs": 3, 00:15:14.047 "num_base_bdevs_discovered": 3, 00:15:14.047 "num_base_bdevs_operational": 3, 00:15:14.047 "process": { 00:15:14.047 "type": "rebuild", 00:15:14.047 "target": "spare", 00:15:14.047 "progress": { 00:15:14.047 "blocks": 20480, 00:15:14.047 "percent": 16 00:15:14.047 } 00:15:14.047 }, 00:15:14.047 "base_bdevs_list": [ 00:15:14.047 { 00:15:14.047 "name": "spare", 00:15:14.047 "uuid": "3235f4fb-609c-5ff5-9722-f779fbe7f1d6", 00:15:14.047 "is_configured": true, 00:15:14.047 "data_offset": 2048, 00:15:14.047 "data_size": 63488 00:15:14.047 }, 00:15:14.047 { 00:15:14.047 "name": "BaseBdev2", 00:15:14.048 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:14.048 "is_configured": true, 00:15:14.048 "data_offset": 2048, 00:15:14.048 "data_size": 63488 00:15:14.048 }, 00:15:14.048 { 00:15:14.048 "name": "BaseBdev3", 00:15:14.048 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:14.048 "is_configured": true, 00:15:14.048 "data_offset": 2048, 00:15:14.048 "data_size": 63488 00:15:14.048 } 00:15:14.048 ] 00:15:14.048 }' 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:14.048 [2024-10-13 02:29:32.446768] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.048 [2024-10-13 02:29:32.502810] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:14.048 [2024-10-13 02:29:32.502907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.048 [2024-10-13 02:29:32.502926] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.048 [2024-10-13 02:29:32.502939] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.048 "name": "raid_bdev1", 00:15:14.048 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:14.048 "strip_size_kb": 64, 00:15:14.048 "state": "online", 00:15:14.048 "raid_level": "raid5f", 00:15:14.048 "superblock": true, 00:15:14.048 "num_base_bdevs": 3, 00:15:14.048 "num_base_bdevs_discovered": 2, 00:15:14.048 "num_base_bdevs_operational": 2, 00:15:14.048 "base_bdevs_list": [ 00:15:14.048 { 00:15:14.048 "name": null, 00:15:14.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.048 "is_configured": false, 00:15:14.048 "data_offset": 0, 00:15:14.048 "data_size": 63488 00:15:14.048 }, 00:15:14.048 { 00:15:14.048 "name": "BaseBdev2", 00:15:14.048 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:14.048 "is_configured": true, 00:15:14.048 "data_offset": 2048, 00:15:14.048 "data_size": 63488 00:15:14.048 }, 00:15:14.048 { 00:15:14.048 "name": "BaseBdev3", 00:15:14.048 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:14.048 "is_configured": true, 00:15:14.048 "data_offset": 2048, 00:15:14.048 "data_size": 63488 00:15:14.048 } 00:15:14.048 ] 00:15:14.048 }' 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.048 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.308 02:29:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.308 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.308 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.308 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.308 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.308 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.308 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.308 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.569 02:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.569 "name": "raid_bdev1", 00:15:14.569 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:14.569 "strip_size_kb": 64, 00:15:14.569 "state": "online", 00:15:14.569 "raid_level": "raid5f", 00:15:14.569 "superblock": true, 00:15:14.569 "num_base_bdevs": 3, 00:15:14.569 "num_base_bdevs_discovered": 2, 00:15:14.569 "num_base_bdevs_operational": 2, 00:15:14.569 "base_bdevs_list": [ 00:15:14.569 { 00:15:14.569 "name": null, 00:15:14.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.569 "is_configured": false, 00:15:14.569 "data_offset": 0, 00:15:14.569 "data_size": 63488 00:15:14.569 }, 00:15:14.569 { 00:15:14.569 "name": "BaseBdev2", 00:15:14.569 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:14.569 "is_configured": true, 00:15:14.569 "data_offset": 2048, 00:15:14.569 "data_size": 63488 00:15:14.569 }, 00:15:14.569 { 00:15:14.569 "name": "BaseBdev3", 00:15:14.569 "uuid": 
"ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:14.569 "is_configured": true, 00:15:14.569 "data_offset": 2048, 00:15:14.569 "data_size": 63488 00:15:14.569 } 00:15:14.569 ] 00:15:14.569 }' 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.569 [2024-10-13 02:29:33.091273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:14.569 [2024-10-13 02:29:33.091450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.569 [2024-10-13 02:29:33.091497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:14.569 [2024-10-13 02:29:33.091511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.569 [2024-10-13 02:29:33.091978] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.569 [2024-10-13 02:29:33.092008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.569 [2024-10-13 02:29:33.092092] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:14.569 [2024-10-13 02:29:33.092108] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:14.569 [2024-10-13 02:29:33.092117] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:14.569 [2024-10-13 02:29:33.092130] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:14.569 BaseBdev1 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.569 02:29:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.509 "name": "raid_bdev1", 00:15:15.509 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:15.509 "strip_size_kb": 64, 00:15:15.509 "state": "online", 00:15:15.509 "raid_level": "raid5f", 00:15:15.509 "superblock": true, 00:15:15.509 "num_base_bdevs": 3, 00:15:15.509 "num_base_bdevs_discovered": 2, 00:15:15.509 "num_base_bdevs_operational": 2, 00:15:15.509 "base_bdevs_list": [ 00:15:15.509 { 00:15:15.509 "name": null, 00:15:15.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.509 "is_configured": false, 00:15:15.509 "data_offset": 0, 00:15:15.509 "data_size": 63488 00:15:15.509 }, 00:15:15.509 { 00:15:15.509 "name": "BaseBdev2", 00:15:15.509 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:15.509 "is_configured": true, 00:15:15.509 "data_offset": 2048, 00:15:15.509 "data_size": 63488 00:15:15.509 }, 00:15:15.509 { 00:15:15.509 "name": "BaseBdev3", 00:15:15.509 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:15.509 "is_configured": true, 00:15:15.509 "data_offset": 2048, 00:15:15.509 "data_size": 63488 00:15:15.509 } 00:15:15.509 ] 00:15:15.509 }' 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:15.509 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.079 "name": "raid_bdev1", 00:15:16.079 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:16.079 "strip_size_kb": 64, 00:15:16.079 "state": "online", 00:15:16.079 "raid_level": "raid5f", 00:15:16.079 "superblock": true, 00:15:16.079 "num_base_bdevs": 3, 00:15:16.079 "num_base_bdevs_discovered": 2, 00:15:16.079 "num_base_bdevs_operational": 2, 00:15:16.079 "base_bdevs_list": [ 00:15:16.079 { 00:15:16.079 "name": null, 00:15:16.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.079 "is_configured": false, 00:15:16.079 "data_offset": 0, 00:15:16.079 "data_size": 63488 00:15:16.079 }, 00:15:16.079 { 00:15:16.079 "name": 
"BaseBdev2", 00:15:16.079 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:16.079 "is_configured": true, 00:15:16.079 "data_offset": 2048, 00:15:16.079 "data_size": 63488 00:15:16.079 }, 00:15:16.079 { 00:15:16.079 "name": "BaseBdev3", 00:15:16.079 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:16.079 "is_configured": true, 00:15:16.079 "data_offset": 2048, 00:15:16.079 "data_size": 63488 00:15:16.079 } 00:15:16.079 ] 00:15:16.079 }' 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.079 [2024-10-13 02:29:34.676675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.079 [2024-10-13 02:29:34.676999] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:16.079 [2024-10-13 02:29:34.677064] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:16.079 request: 00:15:16.079 { 00:15:16.079 "base_bdev": "BaseBdev1", 00:15:16.079 "raid_bdev": "raid_bdev1", 00:15:16.079 "method": "bdev_raid_add_base_bdev", 00:15:16.079 "req_id": 1 00:15:16.079 } 00:15:16.079 Got JSON-RPC error response 00:15:16.079 response: 00:15:16.079 { 00:15:16.079 "code": -22, 00:15:16.079 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:16.079 } 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:16.079 02:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.020 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.280 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.280 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.280 "name": "raid_bdev1", 00:15:17.280 "uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:17.280 "strip_size_kb": 64, 00:15:17.280 "state": "online", 00:15:17.280 "raid_level": "raid5f", 00:15:17.280 "superblock": true, 00:15:17.280 "num_base_bdevs": 3, 00:15:17.280 "num_base_bdevs_discovered": 2, 00:15:17.280 "num_base_bdevs_operational": 2, 00:15:17.280 "base_bdevs_list": [ 00:15:17.280 { 00:15:17.280 "name": null, 00:15:17.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.280 "is_configured": false, 00:15:17.280 "data_offset": 0, 00:15:17.280 
"data_size": 63488 00:15:17.280 }, 00:15:17.280 { 00:15:17.280 "name": "BaseBdev2", 00:15:17.280 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:17.280 "is_configured": true, 00:15:17.280 "data_offset": 2048, 00:15:17.280 "data_size": 63488 00:15:17.280 }, 00:15:17.280 { 00:15:17.280 "name": "BaseBdev3", 00:15:17.280 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:17.280 "is_configured": true, 00:15:17.280 "data_offset": 2048, 00:15:17.280 "data_size": 63488 00:15:17.280 } 00:15:17.280 ] 00:15:17.280 }' 00:15:17.280 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.280 02:29:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.540 "name": "raid_bdev1", 00:15:17.540 
"uuid": "03d380ba-9471-488d-b570-66828fe25c27", 00:15:17.540 "strip_size_kb": 64, 00:15:17.540 "state": "online", 00:15:17.540 "raid_level": "raid5f", 00:15:17.540 "superblock": true, 00:15:17.540 "num_base_bdevs": 3, 00:15:17.540 "num_base_bdevs_discovered": 2, 00:15:17.540 "num_base_bdevs_operational": 2, 00:15:17.540 "base_bdevs_list": [ 00:15:17.540 { 00:15:17.540 "name": null, 00:15:17.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.540 "is_configured": false, 00:15:17.540 "data_offset": 0, 00:15:17.540 "data_size": 63488 00:15:17.540 }, 00:15:17.540 { 00:15:17.540 "name": "BaseBdev2", 00:15:17.540 "uuid": "c2e3b18d-bc4f-5279-8c36-ae577bbdca33", 00:15:17.540 "is_configured": true, 00:15:17.540 "data_offset": 2048, 00:15:17.540 "data_size": 63488 00:15:17.540 }, 00:15:17.540 { 00:15:17.540 "name": "BaseBdev3", 00:15:17.540 "uuid": "ba239189-d9c0-5684-9ff2-db51d7b4f43e", 00:15:17.540 "is_configured": true, 00:15:17.540 "data_offset": 2048, 00:15:17.540 "data_size": 63488 00:15:17.540 } 00:15:17.540 ] 00:15:17.540 }' 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.540 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92448 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92448 ']' 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92448 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92448 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92448' 00:15:17.800 killing process with pid 92448 00:15:17.800 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92448 00:15:17.800 Received shutdown signal, test time was about 60.000000 seconds 00:15:17.800 00:15:17.800 Latency(us) 00:15:17.800 [2024-10-13T02:29:36.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.800 [2024-10-13T02:29:36.484Z] =================================================================================================================== 00:15:17.800 [2024-10-13T02:29:36.484Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:17.800 [2024-10-13 02:29:36.267191] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.801 [2024-10-13 02:29:36.267313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.801 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92448 00:15:17.801 [2024-10-13 02:29:36.267378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.801 [2024-10-13 02:29:36.267388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:17.801 [2024-10-13 02:29:36.308821] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.061 ************************************ 00:15:18.061 END TEST 
raid5f_rebuild_test_sb 00:15:18.061 ************************************ 00:15:18.061 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:18.061 00:15:18.061 real 0m21.492s 00:15:18.061 user 0m27.951s 00:15:18.061 sys 0m2.730s 00:15:18.061 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:18.061 02:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.061 02:29:36 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:18.061 02:29:36 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:18.061 02:29:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:18.061 02:29:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.061 02:29:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.061 ************************************ 00:15:18.061 START TEST raid5f_state_function_test 00:15:18.061 ************************************ 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:18.061 
02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 
00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:18.061 Process raid pid: 93180 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93180 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93180' 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93180 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93180 ']' 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:18.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:18.061 02:29:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.061 [2024-10-13 02:29:36.689961] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:18.061 [2024-10-13 02:29:36.690171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.321 [2024-10-13 02:29:36.834051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.321 [2024-10-13 02:29:36.885439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.321 [2024-10-13 02:29:36.928053] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.321 [2024-10-13 02:29:36.928168] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.891 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.891 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:18.892 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:18.892 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.892 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.152 [2024-10-13 02:29:37.581816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.152 [2024-10-13 02:29:37.581907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.152 [2024-10-13 02:29:37.581927] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.152 [2024-10-13 02:29:37.581938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.152 [2024-10-13 02:29:37.581944] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:19.152 [2024-10-13 02:29:37.581957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.152 [2024-10-13 02:29:37.581964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:19.152 [2024-10-13 02:29:37.581972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.152 "name": "Existed_Raid", 00:15:19.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.152 "strip_size_kb": 64, 00:15:19.152 "state": "configuring", 00:15:19.152 "raid_level": "raid5f", 00:15:19.152 "superblock": false, 00:15:19.152 "num_base_bdevs": 4, 00:15:19.152 "num_base_bdevs_discovered": 0, 00:15:19.152 "num_base_bdevs_operational": 4, 00:15:19.152 "base_bdevs_list": [ 00:15:19.152 { 00:15:19.152 "name": "BaseBdev1", 00:15:19.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.152 "is_configured": false, 00:15:19.152 "data_offset": 0, 00:15:19.152 "data_size": 0 00:15:19.152 }, 00:15:19.152 { 00:15:19.152 "name": "BaseBdev2", 00:15:19.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.152 "is_configured": false, 00:15:19.152 "data_offset": 0, 00:15:19.152 "data_size": 0 00:15:19.152 }, 00:15:19.152 { 00:15:19.152 "name": "BaseBdev3", 00:15:19.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.152 "is_configured": false, 00:15:19.152 "data_offset": 0, 00:15:19.152 "data_size": 0 00:15:19.152 }, 00:15:19.152 { 00:15:19.152 "name": "BaseBdev4", 00:15:19.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.152 "is_configured": false, 00:15:19.152 "data_offset": 0, 00:15:19.152 "data_size": 0 00:15:19.152 } 00:15:19.152 ] 00:15:19.152 }' 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.152 02:29:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.413 [2024-10-13 02:29:38.060819] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.413 [2024-10-13 02:29:38.060897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.413 [2024-10-13 02:29:38.072794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.413 [2024-10-13 02:29:38.072852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.413 [2024-10-13 02:29:38.072876] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.413 [2024-10-13 02:29:38.072898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.413 [2024-10-13 02:29:38.072904] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.413 [2024-10-13 02:29:38.072913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.413 [2024-10-13 02:29:38.072919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:19.413 [2024-10-13 02:29:38.072928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.413 [2024-10-13 02:29:38.089598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.413 BaseBdev1 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.413 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.673 
02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.673 [ 00:15:19.673 { 00:15:19.673 "name": "BaseBdev1", 00:15:19.673 "aliases": [ 00:15:19.673 "fc61bffc-7d6a-4b71-8ec4-ab4b9d75f245" 00:15:19.673 ], 00:15:19.673 "product_name": "Malloc disk", 00:15:19.673 "block_size": 512, 00:15:19.673 "num_blocks": 65536, 00:15:19.673 "uuid": "fc61bffc-7d6a-4b71-8ec4-ab4b9d75f245", 00:15:19.673 "assigned_rate_limits": { 00:15:19.673 "rw_ios_per_sec": 0, 00:15:19.673 "rw_mbytes_per_sec": 0, 00:15:19.673 "r_mbytes_per_sec": 0, 00:15:19.673 "w_mbytes_per_sec": 0 00:15:19.673 }, 00:15:19.673 "claimed": true, 00:15:19.673 "claim_type": "exclusive_write", 00:15:19.673 "zoned": false, 00:15:19.673 "supported_io_types": { 00:15:19.673 "read": true, 00:15:19.673 "write": true, 00:15:19.673 "unmap": true, 00:15:19.673 "flush": true, 00:15:19.673 "reset": true, 00:15:19.673 "nvme_admin": false, 00:15:19.673 "nvme_io": false, 00:15:19.673 "nvme_io_md": false, 00:15:19.673 "write_zeroes": true, 00:15:19.673 "zcopy": true, 00:15:19.673 "get_zone_info": false, 00:15:19.673 "zone_management": false, 00:15:19.673 "zone_append": false, 00:15:19.673 "compare": false, 00:15:19.673 "compare_and_write": false, 00:15:19.673 "abort": true, 00:15:19.673 "seek_hole": false, 00:15:19.673 "seek_data": false, 00:15:19.673 "copy": true, 00:15:19.673 "nvme_iov_md": false 00:15:19.673 }, 00:15:19.673 "memory_domains": [ 00:15:19.673 { 00:15:19.673 "dma_device_id": "system", 00:15:19.673 "dma_device_type": 1 00:15:19.673 }, 00:15:19.673 { 00:15:19.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.673 "dma_device_type": 2 00:15:19.673 } 00:15:19.673 ], 00:15:19.673 "driver_specific": {} 00:15:19.673 } 
00:15:19.673 ] 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.673 "name": "Existed_Raid", 00:15:19.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.673 "strip_size_kb": 64, 00:15:19.673 "state": "configuring", 00:15:19.673 "raid_level": "raid5f", 00:15:19.673 "superblock": false, 00:15:19.673 "num_base_bdevs": 4, 00:15:19.673 "num_base_bdevs_discovered": 1, 00:15:19.673 "num_base_bdevs_operational": 4, 00:15:19.673 "base_bdevs_list": [ 00:15:19.673 { 00:15:19.673 "name": "BaseBdev1", 00:15:19.673 "uuid": "fc61bffc-7d6a-4b71-8ec4-ab4b9d75f245", 00:15:19.673 "is_configured": true, 00:15:19.673 "data_offset": 0, 00:15:19.673 "data_size": 65536 00:15:19.673 }, 00:15:19.673 { 00:15:19.673 "name": "BaseBdev2", 00:15:19.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.673 "is_configured": false, 00:15:19.673 "data_offset": 0, 00:15:19.673 "data_size": 0 00:15:19.673 }, 00:15:19.673 { 00:15:19.673 "name": "BaseBdev3", 00:15:19.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.673 "is_configured": false, 00:15:19.673 "data_offset": 0, 00:15:19.673 "data_size": 0 00:15:19.673 }, 00:15:19.673 { 00:15:19.673 "name": "BaseBdev4", 00:15:19.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.673 "is_configured": false, 00:15:19.673 "data_offset": 0, 00:15:19.673 "data_size": 0 00:15:19.673 } 00:15:19.673 ] 00:15:19.673 }' 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.673 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.933 
[2024-10-13 02:29:38.556832] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.933 [2024-10-13 02:29:38.556973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.933 [2024-10-13 02:29:38.568899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.933 [2024-10-13 02:29:38.570793] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.933 [2024-10-13 02:29:38.570881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.933 [2024-10-13 02:29:38.570913] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.933 [2024-10-13 02:29:38.570936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.933 [2024-10-13 02:29:38.570954] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:19.933 [2024-10-13 02:29:38.570973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.933 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.194 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.194 "name": "Existed_Raid", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:20.194 "strip_size_kb": 64, 00:15:20.194 "state": "configuring", 00:15:20.194 "raid_level": "raid5f", 00:15:20.194 "superblock": false, 00:15:20.194 "num_base_bdevs": 4, 00:15:20.194 "num_base_bdevs_discovered": 1, 00:15:20.194 "num_base_bdevs_operational": 4, 00:15:20.194 "base_bdevs_list": [ 00:15:20.194 { 00:15:20.194 "name": "BaseBdev1", 00:15:20.194 "uuid": "fc61bffc-7d6a-4b71-8ec4-ab4b9d75f245", 00:15:20.194 "is_configured": true, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 65536 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "name": "BaseBdev2", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.194 "is_configured": false, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 0 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "name": "BaseBdev3", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.194 "is_configured": false, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 0 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "name": "BaseBdev4", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.194 "is_configured": false, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 0 00:15:20.194 } 00:15:20.194 ] 00:15:20.194 }' 00:15:20.194 02:29:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.194 02:29:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.454 [2024-10-13 02:29:39.039380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.454 BaseBdev2 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.454 [ 00:15:20.454 { 00:15:20.454 "name": "BaseBdev2", 00:15:20.454 "aliases": [ 00:15:20.454 "ff3b3577-8038-4b26-beb5-b1a2d41463f6" 00:15:20.454 ], 00:15:20.454 "product_name": "Malloc disk", 00:15:20.454 "block_size": 512, 00:15:20.454 "num_blocks": 65536, 00:15:20.454 "uuid": "ff3b3577-8038-4b26-beb5-b1a2d41463f6", 00:15:20.454 "assigned_rate_limits": { 00:15:20.454 "rw_ios_per_sec": 0, 00:15:20.454 "rw_mbytes_per_sec": 0, 00:15:20.454 
"r_mbytes_per_sec": 0, 00:15:20.454 "w_mbytes_per_sec": 0 00:15:20.454 }, 00:15:20.454 "claimed": true, 00:15:20.454 "claim_type": "exclusive_write", 00:15:20.454 "zoned": false, 00:15:20.454 "supported_io_types": { 00:15:20.454 "read": true, 00:15:20.454 "write": true, 00:15:20.454 "unmap": true, 00:15:20.454 "flush": true, 00:15:20.454 "reset": true, 00:15:20.454 "nvme_admin": false, 00:15:20.454 "nvme_io": false, 00:15:20.454 "nvme_io_md": false, 00:15:20.454 "write_zeroes": true, 00:15:20.454 "zcopy": true, 00:15:20.454 "get_zone_info": false, 00:15:20.454 "zone_management": false, 00:15:20.454 "zone_append": false, 00:15:20.454 "compare": false, 00:15:20.454 "compare_and_write": false, 00:15:20.454 "abort": true, 00:15:20.454 "seek_hole": false, 00:15:20.454 "seek_data": false, 00:15:20.454 "copy": true, 00:15:20.454 "nvme_iov_md": false 00:15:20.454 }, 00:15:20.454 "memory_domains": [ 00:15:20.454 { 00:15:20.454 "dma_device_id": "system", 00:15:20.454 "dma_device_type": 1 00:15:20.454 }, 00:15:20.454 { 00:15:20.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.454 "dma_device_type": 2 00:15:20.454 } 00:15:20.454 ], 00:15:20.454 "driver_specific": {} 00:15:20.454 } 00:15:20.454 ] 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.454 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.454 "name": "Existed_Raid", 00:15:20.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.454 "strip_size_kb": 64, 00:15:20.455 "state": "configuring", 00:15:20.455 "raid_level": "raid5f", 00:15:20.455 "superblock": false, 00:15:20.455 "num_base_bdevs": 4, 00:15:20.455 "num_base_bdevs_discovered": 2, 00:15:20.455 "num_base_bdevs_operational": 4, 00:15:20.455 "base_bdevs_list": [ 00:15:20.455 { 00:15:20.455 "name": "BaseBdev1", 00:15:20.455 "uuid": 
"fc61bffc-7d6a-4b71-8ec4-ab4b9d75f245", 00:15:20.455 "is_configured": true, 00:15:20.455 "data_offset": 0, 00:15:20.455 "data_size": 65536 00:15:20.455 }, 00:15:20.455 { 00:15:20.455 "name": "BaseBdev2", 00:15:20.455 "uuid": "ff3b3577-8038-4b26-beb5-b1a2d41463f6", 00:15:20.455 "is_configured": true, 00:15:20.455 "data_offset": 0, 00:15:20.455 "data_size": 65536 00:15:20.455 }, 00:15:20.455 { 00:15:20.455 "name": "BaseBdev3", 00:15:20.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.455 "is_configured": false, 00:15:20.455 "data_offset": 0, 00:15:20.455 "data_size": 0 00:15:20.455 }, 00:15:20.455 { 00:15:20.455 "name": "BaseBdev4", 00:15:20.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.455 "is_configured": false, 00:15:20.455 "data_offset": 0, 00:15:20.455 "data_size": 0 00:15:20.455 } 00:15:20.455 ] 00:15:20.455 }' 00:15:20.715 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.715 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 [2024-10-13 02:29:39.537528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.976 BaseBdev3 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 [ 00:15:20.976 { 00:15:20.976 "name": "BaseBdev3", 00:15:20.976 "aliases": [ 00:15:20.976 "f0f69a95-4bd4-4628-b2df-2efcc114919c" 00:15:20.976 ], 00:15:20.976 "product_name": "Malloc disk", 00:15:20.976 "block_size": 512, 00:15:20.976 "num_blocks": 65536, 00:15:20.976 "uuid": "f0f69a95-4bd4-4628-b2df-2efcc114919c", 00:15:20.976 "assigned_rate_limits": { 00:15:20.976 "rw_ios_per_sec": 0, 00:15:20.976 "rw_mbytes_per_sec": 0, 00:15:20.976 "r_mbytes_per_sec": 0, 00:15:20.976 "w_mbytes_per_sec": 0 00:15:20.976 }, 00:15:20.976 "claimed": true, 00:15:20.976 "claim_type": "exclusive_write", 00:15:20.976 "zoned": false, 00:15:20.976 "supported_io_types": { 00:15:20.976 "read": true, 00:15:20.976 "write": true, 00:15:20.976 "unmap": true, 00:15:20.976 "flush": true, 00:15:20.976 "reset": true, 00:15:20.976 "nvme_admin": false, 
00:15:20.976 "nvme_io": false, 00:15:20.976 "nvme_io_md": false, 00:15:20.976 "write_zeroes": true, 00:15:20.976 "zcopy": true, 00:15:20.976 "get_zone_info": false, 00:15:20.976 "zone_management": false, 00:15:20.976 "zone_append": false, 00:15:20.976 "compare": false, 00:15:20.976 "compare_and_write": false, 00:15:20.976 "abort": true, 00:15:20.976 "seek_hole": false, 00:15:20.976 "seek_data": false, 00:15:20.976 "copy": true, 00:15:20.976 "nvme_iov_md": false 00:15:20.976 }, 00:15:20.976 "memory_domains": [ 00:15:20.976 { 00:15:20.976 "dma_device_id": "system", 00:15:20.976 "dma_device_type": 1 00:15:20.976 }, 00:15:20.976 { 00:15:20.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.976 "dma_device_type": 2 00:15:20.976 } 00:15:20.976 ], 00:15:20.976 "driver_specific": {} 00:15:20.976 } 00:15:20.976 ] 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.976 "name": "Existed_Raid", 00:15:20.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.976 "strip_size_kb": 64, 00:15:20.976 "state": "configuring", 00:15:20.976 "raid_level": "raid5f", 00:15:20.976 "superblock": false, 00:15:20.976 "num_base_bdevs": 4, 00:15:20.976 "num_base_bdevs_discovered": 3, 00:15:20.976 "num_base_bdevs_operational": 4, 00:15:20.976 "base_bdevs_list": [ 00:15:20.976 { 00:15:20.976 "name": "BaseBdev1", 00:15:20.976 "uuid": "fc61bffc-7d6a-4b71-8ec4-ab4b9d75f245", 00:15:20.976 "is_configured": true, 00:15:20.976 "data_offset": 0, 00:15:20.976 "data_size": 65536 00:15:20.976 }, 00:15:20.976 { 00:15:20.976 "name": "BaseBdev2", 00:15:20.976 "uuid": "ff3b3577-8038-4b26-beb5-b1a2d41463f6", 00:15:20.976 "is_configured": true, 00:15:20.976 "data_offset": 0, 00:15:20.976 "data_size": 65536 00:15:20.976 }, 00:15:20.976 { 
00:15:20.976 "name": "BaseBdev3", 00:15:20.976 "uuid": "f0f69a95-4bd4-4628-b2df-2efcc114919c", 00:15:20.976 "is_configured": true, 00:15:20.976 "data_offset": 0, 00:15:20.976 "data_size": 65536 00:15:20.976 }, 00:15:20.976 { 00:15:20.976 "name": "BaseBdev4", 00:15:20.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.976 "is_configured": false, 00:15:20.976 "data_offset": 0, 00:15:20.976 "data_size": 0 00:15:20.976 } 00:15:20.976 ] 00:15:20.976 }' 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.976 02:29:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.547 [2024-10-13 02:29:40.019719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:21.547 [2024-10-13 02:29:40.019779] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:21.547 [2024-10-13 02:29:40.019787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:21.547 [2024-10-13 02:29:40.020061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:21.547 [2024-10-13 02:29:40.020514] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:21.547 [2024-10-13 02:29:40.020535] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:21.547 [2024-10-13 02:29:40.020739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.547 BaseBdev4 00:15:21.547 02:29:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.547 [ 00:15:21.547 { 00:15:21.547 "name": "BaseBdev4", 00:15:21.547 "aliases": [ 00:15:21.547 "0cfb123f-9102-4270-86da-eaa23927fafd" 00:15:21.547 ], 00:15:21.547 "product_name": "Malloc disk", 00:15:21.547 "block_size": 512, 00:15:21.547 "num_blocks": 65536, 00:15:21.547 "uuid": "0cfb123f-9102-4270-86da-eaa23927fafd", 00:15:21.547 "assigned_rate_limits": { 00:15:21.547 "rw_ios_per_sec": 0, 00:15:21.547 
"rw_mbytes_per_sec": 0, 00:15:21.547 "r_mbytes_per_sec": 0, 00:15:21.547 "w_mbytes_per_sec": 0 00:15:21.547 }, 00:15:21.547 "claimed": true, 00:15:21.547 "claim_type": "exclusive_write", 00:15:21.547 "zoned": false, 00:15:21.547 "supported_io_types": { 00:15:21.547 "read": true, 00:15:21.547 "write": true, 00:15:21.547 "unmap": true, 00:15:21.547 "flush": true, 00:15:21.547 "reset": true, 00:15:21.547 "nvme_admin": false, 00:15:21.547 "nvme_io": false, 00:15:21.547 "nvme_io_md": false, 00:15:21.547 "write_zeroes": true, 00:15:21.547 "zcopy": true, 00:15:21.547 "get_zone_info": false, 00:15:21.547 "zone_management": false, 00:15:21.547 "zone_append": false, 00:15:21.547 "compare": false, 00:15:21.547 "compare_and_write": false, 00:15:21.547 "abort": true, 00:15:21.547 "seek_hole": false, 00:15:21.547 "seek_data": false, 00:15:21.547 "copy": true, 00:15:21.547 "nvme_iov_md": false 00:15:21.547 }, 00:15:21.547 "memory_domains": [ 00:15:21.547 { 00:15:21.547 "dma_device_id": "system", 00:15:21.547 "dma_device_type": 1 00:15:21.547 }, 00:15:21.547 { 00:15:21.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.547 "dma_device_type": 2 00:15:21.547 } 00:15:21.547 ], 00:15:21.547 "driver_specific": {} 00:15:21.547 } 00:15:21.547 ] 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.547 02:29:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.547 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.548 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.548 "name": "Existed_Raid", 00:15:21.548 "uuid": "4e9c29da-45dd-4ad9-a3b5-1a10266cc5f9", 00:15:21.548 "strip_size_kb": 64, 00:15:21.548 "state": "online", 00:15:21.548 "raid_level": "raid5f", 00:15:21.548 "superblock": false, 00:15:21.548 "num_base_bdevs": 4, 00:15:21.548 "num_base_bdevs_discovered": 4, 00:15:21.548 "num_base_bdevs_operational": 4, 00:15:21.548 "base_bdevs_list": [ 00:15:21.548 { 00:15:21.548 "name": 
"BaseBdev1", 00:15:21.548 "uuid": "fc61bffc-7d6a-4b71-8ec4-ab4b9d75f245", 00:15:21.548 "is_configured": true, 00:15:21.548 "data_offset": 0, 00:15:21.548 "data_size": 65536 00:15:21.548 }, 00:15:21.548 { 00:15:21.548 "name": "BaseBdev2", 00:15:21.548 "uuid": "ff3b3577-8038-4b26-beb5-b1a2d41463f6", 00:15:21.548 "is_configured": true, 00:15:21.548 "data_offset": 0, 00:15:21.548 "data_size": 65536 00:15:21.548 }, 00:15:21.548 { 00:15:21.548 "name": "BaseBdev3", 00:15:21.548 "uuid": "f0f69a95-4bd4-4628-b2df-2efcc114919c", 00:15:21.548 "is_configured": true, 00:15:21.548 "data_offset": 0, 00:15:21.548 "data_size": 65536 00:15:21.548 }, 00:15:21.548 { 00:15:21.548 "name": "BaseBdev4", 00:15:21.548 "uuid": "0cfb123f-9102-4270-86da-eaa23927fafd", 00:15:21.548 "is_configured": true, 00:15:21.548 "data_offset": 0, 00:15:21.548 "data_size": 65536 00:15:21.548 } 00:15:21.548 ] 00:15:21.548 }' 00:15:21.548 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.548 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.118 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:22.118 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.119 [2024-10-13 02:29:40.531414] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:22.119 "name": "Existed_Raid", 00:15:22.119 "aliases": [ 00:15:22.119 "4e9c29da-45dd-4ad9-a3b5-1a10266cc5f9" 00:15:22.119 ], 00:15:22.119 "product_name": "Raid Volume", 00:15:22.119 "block_size": 512, 00:15:22.119 "num_blocks": 196608, 00:15:22.119 "uuid": "4e9c29da-45dd-4ad9-a3b5-1a10266cc5f9", 00:15:22.119 "assigned_rate_limits": { 00:15:22.119 "rw_ios_per_sec": 0, 00:15:22.119 "rw_mbytes_per_sec": 0, 00:15:22.119 "r_mbytes_per_sec": 0, 00:15:22.119 "w_mbytes_per_sec": 0 00:15:22.119 }, 00:15:22.119 "claimed": false, 00:15:22.119 "zoned": false, 00:15:22.119 "supported_io_types": { 00:15:22.119 "read": true, 00:15:22.119 "write": true, 00:15:22.119 "unmap": false, 00:15:22.119 "flush": false, 00:15:22.119 "reset": true, 00:15:22.119 "nvme_admin": false, 00:15:22.119 "nvme_io": false, 00:15:22.119 "nvme_io_md": false, 00:15:22.119 "write_zeroes": true, 00:15:22.119 "zcopy": false, 00:15:22.119 "get_zone_info": false, 00:15:22.119 "zone_management": false, 00:15:22.119 "zone_append": false, 00:15:22.119 "compare": false, 00:15:22.119 "compare_and_write": false, 00:15:22.119 "abort": false, 00:15:22.119 "seek_hole": false, 00:15:22.119 "seek_data": false, 00:15:22.119 "copy": false, 00:15:22.119 "nvme_iov_md": false 00:15:22.119 }, 00:15:22.119 "driver_specific": { 00:15:22.119 "raid": { 00:15:22.119 "uuid": "4e9c29da-45dd-4ad9-a3b5-1a10266cc5f9", 00:15:22.119 "strip_size_kb": 64, 
00:15:22.119 "state": "online", 00:15:22.119 "raid_level": "raid5f", 00:15:22.119 "superblock": false, 00:15:22.119 "num_base_bdevs": 4, 00:15:22.119 "num_base_bdevs_discovered": 4, 00:15:22.119 "num_base_bdevs_operational": 4, 00:15:22.119 "base_bdevs_list": [ 00:15:22.119 { 00:15:22.119 "name": "BaseBdev1", 00:15:22.119 "uuid": "fc61bffc-7d6a-4b71-8ec4-ab4b9d75f245", 00:15:22.119 "is_configured": true, 00:15:22.119 "data_offset": 0, 00:15:22.119 "data_size": 65536 00:15:22.119 }, 00:15:22.119 { 00:15:22.119 "name": "BaseBdev2", 00:15:22.119 "uuid": "ff3b3577-8038-4b26-beb5-b1a2d41463f6", 00:15:22.119 "is_configured": true, 00:15:22.119 "data_offset": 0, 00:15:22.119 "data_size": 65536 00:15:22.119 }, 00:15:22.119 { 00:15:22.119 "name": "BaseBdev3", 00:15:22.119 "uuid": "f0f69a95-4bd4-4628-b2df-2efcc114919c", 00:15:22.119 "is_configured": true, 00:15:22.119 "data_offset": 0, 00:15:22.119 "data_size": 65536 00:15:22.119 }, 00:15:22.119 { 00:15:22.119 "name": "BaseBdev4", 00:15:22.119 "uuid": "0cfb123f-9102-4270-86da-eaa23927fafd", 00:15:22.119 "is_configured": true, 00:15:22.119 "data_offset": 0, 00:15:22.119 "data_size": 65536 00:15:22.119 } 00:15:22.119 ] 00:15:22.119 } 00:15:22.119 } 00:15:22.119 }' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:22.119 BaseBdev2 00:15:22.119 BaseBdev3 00:15:22.119 BaseBdev4' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.119 02:29:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.119 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.380 [2024-10-13 02:29:40.850673] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.380 02:29:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.380 "name": "Existed_Raid", 00:15:22.380 "uuid": "4e9c29da-45dd-4ad9-a3b5-1a10266cc5f9", 00:15:22.380 "strip_size_kb": 64, 00:15:22.380 "state": "online", 00:15:22.380 "raid_level": "raid5f", 00:15:22.380 "superblock": false, 00:15:22.380 "num_base_bdevs": 4, 00:15:22.380 "num_base_bdevs_discovered": 3, 00:15:22.380 "num_base_bdevs_operational": 3, 00:15:22.380 "base_bdevs_list": [ 00:15:22.380 { 00:15:22.380 "name": null, 00:15:22.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.380 "is_configured": false, 00:15:22.380 "data_offset": 0, 00:15:22.380 "data_size": 65536 00:15:22.380 }, 00:15:22.380 { 00:15:22.380 "name": "BaseBdev2", 00:15:22.380 "uuid": "ff3b3577-8038-4b26-beb5-b1a2d41463f6", 00:15:22.380 "is_configured": true, 00:15:22.380 "data_offset": 0, 00:15:22.380 "data_size": 65536 00:15:22.380 }, 00:15:22.380 { 00:15:22.380 "name": "BaseBdev3", 00:15:22.380 "uuid": "f0f69a95-4bd4-4628-b2df-2efcc114919c", 00:15:22.380 "is_configured": true, 00:15:22.380 "data_offset": 0, 00:15:22.380 "data_size": 65536 00:15:22.380 }, 00:15:22.380 { 00:15:22.380 "name": "BaseBdev4", 00:15:22.380 "uuid": "0cfb123f-9102-4270-86da-eaa23927fafd", 00:15:22.380 "is_configured": true, 00:15:22.380 "data_offset": 0, 00:15:22.380 "data_size": 65536 00:15:22.380 } 00:15:22.380 ] 00:15:22.380 }' 00:15:22.380 
02:29:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.380 02:29:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.640 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:22.640 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.640 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:22.640 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.640 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.640 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.901 [2024-10-13 02:29:41.361194] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:22.901 [2024-10-13 02:29:41.361391] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.901 [2024-10-13 02:29:41.372563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.901 [2024-10-13 02:29:41.428553] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.901 [2024-10-13 02:29:41.499714] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:22.901 [2024-10-13 02:29:41.499786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:22.901 02:29:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.901 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.901 BaseBdev2 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.171 [ 00:15:23.171 { 00:15:23.171 "name": "BaseBdev2", 00:15:23.171 "aliases": [ 00:15:23.171 "5f136b9e-68c9-46bc-be80-5035524dbb20" 00:15:23.171 ], 00:15:23.171 "product_name": "Malloc disk", 00:15:23.171 "block_size": 512, 00:15:23.171 "num_blocks": 65536, 00:15:23.171 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:23.171 "assigned_rate_limits": { 00:15:23.171 "rw_ios_per_sec": 0, 00:15:23.171 "rw_mbytes_per_sec": 0, 00:15:23.171 "r_mbytes_per_sec": 0, 00:15:23.171 "w_mbytes_per_sec": 0 00:15:23.171 }, 00:15:23.171 "claimed": false, 00:15:23.171 "zoned": false, 00:15:23.171 "supported_io_types": { 00:15:23.171 "read": true, 00:15:23.171 "write": true, 00:15:23.171 "unmap": true, 00:15:23.171 "flush": true, 00:15:23.171 "reset": true, 00:15:23.171 "nvme_admin": false, 00:15:23.171 "nvme_io": false, 00:15:23.171 "nvme_io_md": false, 00:15:23.171 "write_zeroes": true, 00:15:23.171 "zcopy": true, 00:15:23.171 "get_zone_info": false, 00:15:23.171 "zone_management": false, 00:15:23.171 "zone_append": false, 00:15:23.171 "compare": false, 00:15:23.171 "compare_and_write": false, 00:15:23.171 "abort": true, 00:15:23.171 "seek_hole": false, 00:15:23.171 "seek_data": false, 00:15:23.171 "copy": true, 00:15:23.171 "nvme_iov_md": false 00:15:23.171 }, 00:15:23.171 "memory_domains": [ 00:15:23.171 { 00:15:23.171 "dma_device_id": "system", 00:15:23.171 "dma_device_type": 1 00:15:23.171 }, 
00:15:23.171 { 00:15:23.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.171 "dma_device_type": 2 00:15:23.171 } 00:15:23.171 ], 00:15:23.171 "driver_specific": {} 00:15:23.171 } 00:15:23.171 ] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.171 BaseBdev3 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.171 [ 00:15:23.171 { 00:15:23.171 "name": "BaseBdev3", 00:15:23.171 "aliases": [ 00:15:23.171 "dc1eb5eb-f23d-4a55-b79d-557942e23104" 00:15:23.171 ], 00:15:23.171 "product_name": "Malloc disk", 00:15:23.171 "block_size": 512, 00:15:23.171 "num_blocks": 65536, 00:15:23.171 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:23.171 "assigned_rate_limits": { 00:15:23.171 "rw_ios_per_sec": 0, 00:15:23.171 "rw_mbytes_per_sec": 0, 00:15:23.171 "r_mbytes_per_sec": 0, 00:15:23.171 "w_mbytes_per_sec": 0 00:15:23.171 }, 00:15:23.171 "claimed": false, 00:15:23.171 "zoned": false, 00:15:23.171 "supported_io_types": { 00:15:23.171 "read": true, 00:15:23.171 "write": true, 00:15:23.171 "unmap": true, 00:15:23.171 "flush": true, 00:15:23.171 "reset": true, 00:15:23.171 "nvme_admin": false, 00:15:23.171 "nvme_io": false, 00:15:23.171 "nvme_io_md": false, 00:15:23.171 "write_zeroes": true, 00:15:23.171 "zcopy": true, 00:15:23.171 "get_zone_info": false, 00:15:23.171 "zone_management": false, 00:15:23.171 "zone_append": false, 00:15:23.171 "compare": false, 00:15:23.171 "compare_and_write": false, 00:15:23.171 "abort": true, 00:15:23.171 "seek_hole": false, 00:15:23.171 "seek_data": false, 00:15:23.171 "copy": true, 00:15:23.171 "nvme_iov_md": false 00:15:23.171 }, 00:15:23.171 "memory_domains": [ 00:15:23.171 { 00:15:23.171 "dma_device_id": "system", 00:15:23.171 
"dma_device_type": 1 00:15:23.171 }, 00:15:23.171 { 00:15:23.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.171 "dma_device_type": 2 00:15:23.171 } 00:15:23.171 ], 00:15:23.171 "driver_specific": {} 00:15:23.171 } 00:15:23.171 ] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.171 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.172 BaseBdev4 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:23.172 02:29:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.172 [ 00:15:23.172 { 00:15:23.172 "name": "BaseBdev4", 00:15:23.172 "aliases": [ 00:15:23.172 "a10ef882-df37-43e4-86e5-7127cf57cdac" 00:15:23.172 ], 00:15:23.172 "product_name": "Malloc disk", 00:15:23.172 "block_size": 512, 00:15:23.172 "num_blocks": 65536, 00:15:23.172 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:23.172 "assigned_rate_limits": { 00:15:23.172 "rw_ios_per_sec": 0, 00:15:23.172 "rw_mbytes_per_sec": 0, 00:15:23.172 "r_mbytes_per_sec": 0, 00:15:23.172 "w_mbytes_per_sec": 0 00:15:23.172 }, 00:15:23.172 "claimed": false, 00:15:23.172 "zoned": false, 00:15:23.172 "supported_io_types": { 00:15:23.172 "read": true, 00:15:23.172 "write": true, 00:15:23.172 "unmap": true, 00:15:23.172 "flush": true, 00:15:23.172 "reset": true, 00:15:23.172 "nvme_admin": false, 00:15:23.172 "nvme_io": false, 00:15:23.172 "nvme_io_md": false, 00:15:23.172 "write_zeroes": true, 00:15:23.172 "zcopy": true, 00:15:23.172 "get_zone_info": false, 00:15:23.172 "zone_management": false, 00:15:23.172 "zone_append": false, 00:15:23.172 "compare": false, 00:15:23.172 "compare_and_write": false, 00:15:23.172 "abort": true, 00:15:23.172 "seek_hole": false, 00:15:23.172 "seek_data": false, 00:15:23.172 "copy": true, 00:15:23.172 "nvme_iov_md": false 00:15:23.172 }, 00:15:23.172 "memory_domains": [ 00:15:23.172 { 00:15:23.172 
"dma_device_id": "system", 00:15:23.172 "dma_device_type": 1 00:15:23.172 }, 00:15:23.172 { 00:15:23.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.172 "dma_device_type": 2 00:15:23.172 } 00:15:23.172 ], 00:15:23.172 "driver_specific": {} 00:15:23.172 } 00:15:23.172 ] 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.172 [2024-10-13 02:29:41.706239] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.172 [2024-10-13 02:29:41.706386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.172 [2024-10-13 02:29:41.706435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.172 [2024-10-13 02:29:41.708427] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.172 [2024-10-13 02:29:41.708531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.172 "name": "Existed_Raid", 00:15:23.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.172 "strip_size_kb": 64, 00:15:23.172 "state": "configuring", 00:15:23.172 "raid_level": "raid5f", 00:15:23.172 "superblock": false, 00:15:23.172 
"num_base_bdevs": 4, 00:15:23.172 "num_base_bdevs_discovered": 3, 00:15:23.172 "num_base_bdevs_operational": 4, 00:15:23.172 "base_bdevs_list": [ 00:15:23.172 { 00:15:23.172 "name": "BaseBdev1", 00:15:23.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.172 "is_configured": false, 00:15:23.172 "data_offset": 0, 00:15:23.172 "data_size": 0 00:15:23.172 }, 00:15:23.172 { 00:15:23.172 "name": "BaseBdev2", 00:15:23.172 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:23.172 "is_configured": true, 00:15:23.172 "data_offset": 0, 00:15:23.172 "data_size": 65536 00:15:23.172 }, 00:15:23.172 { 00:15:23.172 "name": "BaseBdev3", 00:15:23.172 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:23.172 "is_configured": true, 00:15:23.172 "data_offset": 0, 00:15:23.172 "data_size": 65536 00:15:23.172 }, 00:15:23.172 { 00:15:23.172 "name": "BaseBdev4", 00:15:23.172 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:23.172 "is_configured": true, 00:15:23.172 "data_offset": 0, 00:15:23.172 "data_size": 65536 00:15:23.172 } 00:15:23.172 ] 00:15:23.172 }' 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.172 02:29:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.758 [2024-10-13 02:29:42.181387] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.758 "name": "Existed_Raid", 00:15:23.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.758 "strip_size_kb": 64, 00:15:23.758 "state": "configuring", 00:15:23.758 "raid_level": "raid5f", 00:15:23.758 "superblock": false, 00:15:23.758 "num_base_bdevs": 4, 
00:15:23.758 "num_base_bdevs_discovered": 2, 00:15:23.758 "num_base_bdevs_operational": 4, 00:15:23.758 "base_bdevs_list": [ 00:15:23.758 { 00:15:23.758 "name": "BaseBdev1", 00:15:23.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.758 "is_configured": false, 00:15:23.758 "data_offset": 0, 00:15:23.758 "data_size": 0 00:15:23.758 }, 00:15:23.758 { 00:15:23.758 "name": null, 00:15:23.758 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:23.758 "is_configured": false, 00:15:23.758 "data_offset": 0, 00:15:23.758 "data_size": 65536 00:15:23.758 }, 00:15:23.758 { 00:15:23.758 "name": "BaseBdev3", 00:15:23.758 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:23.758 "is_configured": true, 00:15:23.758 "data_offset": 0, 00:15:23.758 "data_size": 65536 00:15:23.758 }, 00:15:23.758 { 00:15:23.758 "name": "BaseBdev4", 00:15:23.758 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:23.758 "is_configured": true, 00:15:23.758 "data_offset": 0, 00:15:23.758 "data_size": 65536 00:15:23.758 } 00:15:23.758 ] 00:15:23.758 }' 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.758 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.018 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.018 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:24.018 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:24.019 02:29:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.019 [2024-10-13 02:29:42.635692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.019 BaseBdev1 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.019 02:29:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.019 [ 00:15:24.019 { 00:15:24.019 "name": "BaseBdev1", 00:15:24.019 "aliases": [ 00:15:24.019 "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82" 00:15:24.019 ], 00:15:24.019 "product_name": "Malloc disk", 00:15:24.019 "block_size": 512, 00:15:24.019 "num_blocks": 65536, 00:15:24.019 "uuid": "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82", 00:15:24.019 "assigned_rate_limits": { 00:15:24.019 "rw_ios_per_sec": 0, 00:15:24.019 "rw_mbytes_per_sec": 0, 00:15:24.019 "r_mbytes_per_sec": 0, 00:15:24.019 "w_mbytes_per_sec": 0 00:15:24.019 }, 00:15:24.019 "claimed": true, 00:15:24.019 "claim_type": "exclusive_write", 00:15:24.019 "zoned": false, 00:15:24.019 "supported_io_types": { 00:15:24.019 "read": true, 00:15:24.019 "write": true, 00:15:24.019 "unmap": true, 00:15:24.019 "flush": true, 00:15:24.019 "reset": true, 00:15:24.019 "nvme_admin": false, 00:15:24.019 "nvme_io": false, 00:15:24.019 "nvme_io_md": false, 00:15:24.019 "write_zeroes": true, 00:15:24.019 "zcopy": true, 00:15:24.019 "get_zone_info": false, 00:15:24.019 "zone_management": false, 00:15:24.019 "zone_append": false, 00:15:24.019 "compare": false, 00:15:24.019 "compare_and_write": false, 00:15:24.019 "abort": true, 00:15:24.019 "seek_hole": false, 00:15:24.019 "seek_data": false, 00:15:24.019 "copy": true, 00:15:24.019 "nvme_iov_md": false 00:15:24.019 }, 00:15:24.019 "memory_domains": [ 00:15:24.019 { 00:15:24.019 "dma_device_id": "system", 00:15:24.019 "dma_device_type": 1 00:15:24.019 }, 00:15:24.019 { 00:15:24.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.019 "dma_device_type": 2 00:15:24.019 } 00:15:24.019 ], 00:15:24.019 "driver_specific": {} 00:15:24.019 } 00:15:24.019 ] 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:24.019 02:29:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.019 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.279 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.279 "name": "Existed_Raid", 00:15:24.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.279 "strip_size_kb": 64, 00:15:24.279 "state": 
"configuring", 00:15:24.279 "raid_level": "raid5f", 00:15:24.279 "superblock": false, 00:15:24.279 "num_base_bdevs": 4, 00:15:24.279 "num_base_bdevs_discovered": 3, 00:15:24.279 "num_base_bdevs_operational": 4, 00:15:24.279 "base_bdevs_list": [ 00:15:24.279 { 00:15:24.279 "name": "BaseBdev1", 00:15:24.279 "uuid": "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82", 00:15:24.279 "is_configured": true, 00:15:24.279 "data_offset": 0, 00:15:24.279 "data_size": 65536 00:15:24.279 }, 00:15:24.279 { 00:15:24.279 "name": null, 00:15:24.279 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:24.279 "is_configured": false, 00:15:24.279 "data_offset": 0, 00:15:24.279 "data_size": 65536 00:15:24.279 }, 00:15:24.279 { 00:15:24.279 "name": "BaseBdev3", 00:15:24.279 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:24.279 "is_configured": true, 00:15:24.279 "data_offset": 0, 00:15:24.279 "data_size": 65536 00:15:24.279 }, 00:15:24.279 { 00:15:24.279 "name": "BaseBdev4", 00:15:24.279 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:24.279 "is_configured": true, 00:15:24.279 "data_offset": 0, 00:15:24.279 "data_size": 65536 00:15:24.279 } 00:15:24.279 ] 00:15:24.279 }' 00:15:24.279 02:29:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.279 02:29:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.540 02:29:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.540 [2024-10-13 02:29:43.147516] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.540 02:29:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.540 "name": "Existed_Raid", 00:15:24.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.540 "strip_size_kb": 64, 00:15:24.540 "state": "configuring", 00:15:24.540 "raid_level": "raid5f", 00:15:24.540 "superblock": false, 00:15:24.540 "num_base_bdevs": 4, 00:15:24.540 "num_base_bdevs_discovered": 2, 00:15:24.540 "num_base_bdevs_operational": 4, 00:15:24.540 "base_bdevs_list": [ 00:15:24.540 { 00:15:24.540 "name": "BaseBdev1", 00:15:24.540 "uuid": "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82", 00:15:24.540 "is_configured": true, 00:15:24.540 "data_offset": 0, 00:15:24.540 "data_size": 65536 00:15:24.540 }, 00:15:24.540 { 00:15:24.540 "name": null, 00:15:24.540 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:24.540 "is_configured": false, 00:15:24.540 "data_offset": 0, 00:15:24.540 "data_size": 65536 00:15:24.540 }, 00:15:24.540 { 00:15:24.540 "name": null, 00:15:24.540 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:24.540 "is_configured": false, 00:15:24.540 "data_offset": 0, 00:15:24.540 "data_size": 65536 00:15:24.540 }, 00:15:24.540 { 00:15:24.540 "name": "BaseBdev4", 00:15:24.540 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:24.540 "is_configured": true, 00:15:24.540 "data_offset": 0, 00:15:24.540 "data_size": 65536 00:15:24.540 } 00:15:24.540 ] 00:15:24.540 }' 00:15:24.540 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.540 02:29:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.111 [2024-10-13 02:29:43.638783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.111 
02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.111 "name": "Existed_Raid", 00:15:25.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.111 "strip_size_kb": 64, 00:15:25.111 "state": "configuring", 00:15:25.111 "raid_level": "raid5f", 00:15:25.111 "superblock": false, 00:15:25.111 "num_base_bdevs": 4, 00:15:25.111 "num_base_bdevs_discovered": 3, 00:15:25.111 "num_base_bdevs_operational": 4, 00:15:25.111 "base_bdevs_list": [ 00:15:25.111 { 00:15:25.111 "name": "BaseBdev1", 00:15:25.111 "uuid": "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82", 00:15:25.111 "is_configured": true, 00:15:25.111 "data_offset": 0, 00:15:25.111 "data_size": 65536 00:15:25.111 }, 00:15:25.111 { 00:15:25.111 "name": null, 00:15:25.111 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:25.111 "is_configured": 
false, 00:15:25.111 "data_offset": 0, 00:15:25.111 "data_size": 65536 00:15:25.111 }, 00:15:25.111 { 00:15:25.111 "name": "BaseBdev3", 00:15:25.111 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:25.111 "is_configured": true, 00:15:25.111 "data_offset": 0, 00:15:25.111 "data_size": 65536 00:15:25.111 }, 00:15:25.111 { 00:15:25.111 "name": "BaseBdev4", 00:15:25.111 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:25.111 "is_configured": true, 00:15:25.111 "data_offset": 0, 00:15:25.111 "data_size": 65536 00:15:25.111 } 00:15:25.111 ] 00:15:25.111 }' 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.111 02:29:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.681 [2024-10-13 02:29:44.165882] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.681 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.681 "name": "Existed_Raid", 00:15:25.681 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:25.681 "strip_size_kb": 64, 00:15:25.681 "state": "configuring", 00:15:25.681 "raid_level": "raid5f", 00:15:25.681 "superblock": false, 00:15:25.681 "num_base_bdevs": 4, 00:15:25.681 "num_base_bdevs_discovered": 2, 00:15:25.681 "num_base_bdevs_operational": 4, 00:15:25.681 "base_bdevs_list": [ 00:15:25.681 { 00:15:25.682 "name": null, 00:15:25.682 "uuid": "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82", 00:15:25.682 "is_configured": false, 00:15:25.682 "data_offset": 0, 00:15:25.682 "data_size": 65536 00:15:25.682 }, 00:15:25.682 { 00:15:25.682 "name": null, 00:15:25.682 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:25.682 "is_configured": false, 00:15:25.682 "data_offset": 0, 00:15:25.682 "data_size": 65536 00:15:25.682 }, 00:15:25.682 { 00:15:25.682 "name": "BaseBdev3", 00:15:25.682 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:25.682 "is_configured": true, 00:15:25.682 "data_offset": 0, 00:15:25.682 "data_size": 65536 00:15:25.682 }, 00:15:25.682 { 00:15:25.682 "name": "BaseBdev4", 00:15:25.682 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:25.682 "is_configured": true, 00:15:25.682 "data_offset": 0, 00:15:25.682 "data_size": 65536 00:15:25.682 } 00:15:25.682 ] 00:15:25.682 }' 00:15:25.682 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.682 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.942 [2024-10-13 02:29:44.611770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.942 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.202 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.202 02:29:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.202 "name": "Existed_Raid", 00:15:26.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.202 "strip_size_kb": 64, 00:15:26.202 "state": "configuring", 00:15:26.202 "raid_level": "raid5f", 00:15:26.202 "superblock": false, 00:15:26.202 "num_base_bdevs": 4, 00:15:26.202 "num_base_bdevs_discovered": 3, 00:15:26.202 "num_base_bdevs_operational": 4, 00:15:26.202 "base_bdevs_list": [ 00:15:26.202 { 00:15:26.202 "name": null, 00:15:26.202 "uuid": "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82", 00:15:26.202 "is_configured": false, 00:15:26.202 "data_offset": 0, 00:15:26.202 "data_size": 65536 00:15:26.202 }, 00:15:26.202 { 00:15:26.202 "name": "BaseBdev2", 00:15:26.203 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:26.203 "is_configured": true, 00:15:26.203 "data_offset": 0, 00:15:26.203 "data_size": 65536 00:15:26.203 }, 00:15:26.203 { 00:15:26.203 "name": "BaseBdev3", 00:15:26.203 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:26.203 "is_configured": true, 00:15:26.203 "data_offset": 0, 00:15:26.203 "data_size": 65536 00:15:26.203 }, 00:15:26.203 { 00:15:26.203 "name": "BaseBdev4", 00:15:26.203 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:26.203 "is_configured": true, 00:15:26.203 "data_offset": 0, 00:15:26.203 "data_size": 65536 00:15:26.203 } 00:15:26.203 ] 00:15:26.203 }' 00:15:26.203 02:29:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.203 02:29:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f8cdf6a6-2fed-43d2-ae2b-04e35163ba82 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.463 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.722 [2024-10-13 02:29:45.153934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:26.722 [2024-10-13 
02:29:45.153991] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:26.722 [2024-10-13 02:29:45.153998] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:26.722 [2024-10-13 02:29:45.154295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:15:26.722 [2024-10-13 02:29:45.154737] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:26.722 [2024-10-13 02:29:45.154758] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:15:26.722 [2024-10-13 02:29:45.154954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.722 NewBaseBdev 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.722 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.722 [ 00:15:26.722 { 00:15:26.722 "name": "NewBaseBdev", 00:15:26.722 "aliases": [ 00:15:26.722 "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82" 00:15:26.722 ], 00:15:26.722 "product_name": "Malloc disk", 00:15:26.722 "block_size": 512, 00:15:26.722 "num_blocks": 65536, 00:15:26.722 "uuid": "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82", 00:15:26.722 "assigned_rate_limits": { 00:15:26.722 "rw_ios_per_sec": 0, 00:15:26.722 "rw_mbytes_per_sec": 0, 00:15:26.722 "r_mbytes_per_sec": 0, 00:15:26.722 "w_mbytes_per_sec": 0 00:15:26.722 }, 00:15:26.722 "claimed": true, 00:15:26.723 "claim_type": "exclusive_write", 00:15:26.723 "zoned": false, 00:15:26.723 "supported_io_types": { 00:15:26.723 "read": true, 00:15:26.723 "write": true, 00:15:26.723 "unmap": true, 00:15:26.723 "flush": true, 00:15:26.723 "reset": true, 00:15:26.723 "nvme_admin": false, 00:15:26.723 "nvme_io": false, 00:15:26.723 "nvme_io_md": false, 00:15:26.723 "write_zeroes": true, 00:15:26.723 "zcopy": true, 00:15:26.723 "get_zone_info": false, 00:15:26.723 "zone_management": false, 00:15:26.723 "zone_append": false, 00:15:26.723 "compare": false, 00:15:26.723 "compare_and_write": false, 00:15:26.723 "abort": true, 00:15:26.723 "seek_hole": false, 00:15:26.723 "seek_data": false, 00:15:26.723 "copy": true, 00:15:26.723 "nvme_iov_md": false 00:15:26.723 }, 00:15:26.723 "memory_domains": [ 00:15:26.723 { 00:15:26.723 "dma_device_id": "system", 00:15:26.723 "dma_device_type": 1 00:15:26.723 }, 00:15:26.723 { 00:15:26.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.723 "dma_device_type": 2 00:15:26.723 } 
00:15:26.723 ], 00:15:26.723 "driver_specific": {} 00:15:26.723 } 00:15:26.723 ] 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.723 "name": "Existed_Raid", 00:15:26.723 "uuid": "9cf4430b-decb-4a16-89e2-bf7feffdf571", 00:15:26.723 "strip_size_kb": 64, 00:15:26.723 "state": "online", 00:15:26.723 "raid_level": "raid5f", 00:15:26.723 "superblock": false, 00:15:26.723 "num_base_bdevs": 4, 00:15:26.723 "num_base_bdevs_discovered": 4, 00:15:26.723 "num_base_bdevs_operational": 4, 00:15:26.723 "base_bdevs_list": [ 00:15:26.723 { 00:15:26.723 "name": "NewBaseBdev", 00:15:26.723 "uuid": "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82", 00:15:26.723 "is_configured": true, 00:15:26.723 "data_offset": 0, 00:15:26.723 "data_size": 65536 00:15:26.723 }, 00:15:26.723 { 00:15:26.723 "name": "BaseBdev2", 00:15:26.723 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:26.723 "is_configured": true, 00:15:26.723 "data_offset": 0, 00:15:26.723 "data_size": 65536 00:15:26.723 }, 00:15:26.723 { 00:15:26.723 "name": "BaseBdev3", 00:15:26.723 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:26.723 "is_configured": true, 00:15:26.723 "data_offset": 0, 00:15:26.723 "data_size": 65536 00:15:26.723 }, 00:15:26.723 { 00:15:26.723 "name": "BaseBdev4", 00:15:26.723 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:26.723 "is_configured": true, 00:15:26.723 "data_offset": 0, 00:15:26.723 "data_size": 65536 00:15:26.723 } 00:15:26.723 ] 00:15:26.723 }' 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.723 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.983 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.983 [2024-10-13 02:29:45.645352] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.244 "name": "Existed_Raid", 00:15:27.244 "aliases": [ 00:15:27.244 "9cf4430b-decb-4a16-89e2-bf7feffdf571" 00:15:27.244 ], 00:15:27.244 "product_name": "Raid Volume", 00:15:27.244 "block_size": 512, 00:15:27.244 "num_blocks": 196608, 00:15:27.244 "uuid": "9cf4430b-decb-4a16-89e2-bf7feffdf571", 00:15:27.244 "assigned_rate_limits": { 00:15:27.244 "rw_ios_per_sec": 0, 00:15:27.244 "rw_mbytes_per_sec": 0, 00:15:27.244 "r_mbytes_per_sec": 0, 00:15:27.244 "w_mbytes_per_sec": 0 00:15:27.244 }, 00:15:27.244 "claimed": false, 00:15:27.244 "zoned": false, 00:15:27.244 "supported_io_types": { 00:15:27.244 "read": true, 00:15:27.244 "write": true, 00:15:27.244 "unmap": false, 00:15:27.244 "flush": false, 00:15:27.244 "reset": true, 00:15:27.244 "nvme_admin": false, 00:15:27.244 "nvme_io": false, 00:15:27.244 "nvme_io_md": 
false, 00:15:27.244 "write_zeroes": true, 00:15:27.244 "zcopy": false, 00:15:27.244 "get_zone_info": false, 00:15:27.244 "zone_management": false, 00:15:27.244 "zone_append": false, 00:15:27.244 "compare": false, 00:15:27.244 "compare_and_write": false, 00:15:27.244 "abort": false, 00:15:27.244 "seek_hole": false, 00:15:27.244 "seek_data": false, 00:15:27.244 "copy": false, 00:15:27.244 "nvme_iov_md": false 00:15:27.244 }, 00:15:27.244 "driver_specific": { 00:15:27.244 "raid": { 00:15:27.244 "uuid": "9cf4430b-decb-4a16-89e2-bf7feffdf571", 00:15:27.244 "strip_size_kb": 64, 00:15:27.244 "state": "online", 00:15:27.244 "raid_level": "raid5f", 00:15:27.244 "superblock": false, 00:15:27.244 "num_base_bdevs": 4, 00:15:27.244 "num_base_bdevs_discovered": 4, 00:15:27.244 "num_base_bdevs_operational": 4, 00:15:27.244 "base_bdevs_list": [ 00:15:27.244 { 00:15:27.244 "name": "NewBaseBdev", 00:15:27.244 "uuid": "f8cdf6a6-2fed-43d2-ae2b-04e35163ba82", 00:15:27.244 "is_configured": true, 00:15:27.244 "data_offset": 0, 00:15:27.244 "data_size": 65536 00:15:27.244 }, 00:15:27.244 { 00:15:27.244 "name": "BaseBdev2", 00:15:27.244 "uuid": "5f136b9e-68c9-46bc-be80-5035524dbb20", 00:15:27.244 "is_configured": true, 00:15:27.244 "data_offset": 0, 00:15:27.244 "data_size": 65536 00:15:27.244 }, 00:15:27.244 { 00:15:27.244 "name": "BaseBdev3", 00:15:27.244 "uuid": "dc1eb5eb-f23d-4a55-b79d-557942e23104", 00:15:27.244 "is_configured": true, 00:15:27.244 "data_offset": 0, 00:15:27.244 "data_size": 65536 00:15:27.244 }, 00:15:27.244 { 00:15:27.244 "name": "BaseBdev4", 00:15:27.244 "uuid": "a10ef882-df37-43e4-86e5-7127cf57cdac", 00:15:27.244 "is_configured": true, 00:15:27.244 "data_offset": 0, 00:15:27.244 "data_size": 65536 00:15:27.244 } 00:15:27.244 ] 00:15:27.244 } 00:15:27.244 } 00:15:27.244 }' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.244 02:29:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:27.244 BaseBdev2 00:15:27.244 BaseBdev3 00:15:27.244 BaseBdev4' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.244 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.244 02:29:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.505 [2024-10-13 02:29:45.952690] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.505 [2024-10-13 02:29:45.952738] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.505 [2024-10-13 02:29:45.952832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.505 [2024-10-13 02:29:45.953102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.505 [2024-10-13 02:29:45.953120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93180 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93180 ']' 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93180 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93180 00:15:27.505 killing process with pid 93180 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93180' 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93180 00:15:27.505 02:29:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93180 00:15:27.505 [2024-10-13 02:29:45.998160] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.505 [2024-10-13 02:29:46.039076] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.765 ************************************ 00:15:27.765 END TEST raid5f_state_function_test 00:15:27.765 ************************************ 00:15:27.765 02:29:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:27.765 00:15:27.765 real 0m9.669s 00:15:27.765 user 0m16.540s 00:15:27.765 sys 0m2.015s 00:15:27.765 02:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:27.765 02:29:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.765 02:29:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:27.765 02:29:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:27.765 02:29:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.765 02:29:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.765 ************************************ 00:15:27.765 START TEST 
raid5f_state_function_test_sb 00:15:27.765 ************************************ 00:15:27.765 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:15:27.765 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:27.766 
02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93829 00:15:27.766 Process raid pid: 93829 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93829' 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93829 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # 
'[' -z 93829 ']' 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:27.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:27.766 02:29:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.766 [2024-10-13 02:29:46.428423] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:27.766 [2024-10-13 02:29:46.428587] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.026 [2024-10-13 02:29:46.574738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.026 [2024-10-13 02:29:46.624964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.026 [2024-10-13 02:29:46.666954] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.026 [2024-10-13 02:29:46.666998] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.967 [2024-10-13 02:29:47.324378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.967 [2024-10-13 02:29:47.324455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.967 [2024-10-13 02:29:47.324475] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:28.967 [2024-10-13 02:29:47.324486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:28.967 [2024-10-13 02:29:47.324492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:28.967 [2024-10-13 02:29:47.324503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:28.967 [2024-10-13 02:29:47.324509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:28.967 [2024-10-13 02:29:47.324517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.967 "name": "Existed_Raid", 00:15:28.967 "uuid": "238a4962-d156-46f6-b642-dd6af5cdfa69", 00:15:28.967 "strip_size_kb": 64, 00:15:28.967 "state": "configuring", 00:15:28.967 "raid_level": "raid5f", 00:15:28.967 "superblock": true, 00:15:28.967 "num_base_bdevs": 4, 00:15:28.967 "num_base_bdevs_discovered": 0, 00:15:28.967 "num_base_bdevs_operational": 4, 00:15:28.967 "base_bdevs_list": [ 00:15:28.967 { 00:15:28.967 "name": "BaseBdev1", 00:15:28.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.967 "is_configured": false, 00:15:28.967 "data_offset": 0, 00:15:28.967 "data_size": 0 00:15:28.967 }, 00:15:28.967 { 00:15:28.967 "name": "BaseBdev2", 00:15:28.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.967 "is_configured": false, 00:15:28.967 "data_offset": 0, 00:15:28.967 "data_size": 0 00:15:28.967 }, 00:15:28.967 { 00:15:28.967 "name": "BaseBdev3", 00:15:28.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.967 "is_configured": false, 00:15:28.967 "data_offset": 0, 00:15:28.967 "data_size": 0 00:15:28.967 }, 00:15:28.967 { 00:15:28.967 "name": "BaseBdev4", 00:15:28.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.967 "is_configured": false, 00:15:28.967 "data_offset": 0, 00:15:28.967 "data_size": 0 00:15:28.967 } 00:15:28.967 ] 00:15:28.967 }' 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.967 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.227 [2024-10-13 02:29:47.763484] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.227 [2024-10-13 02:29:47.763542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.227 [2024-10-13 02:29:47.775489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.227 [2024-10-13 02:29:47.775542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.227 [2024-10-13 02:29:47.775566] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.227 [2024-10-13 02:29:47.775575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.227 [2024-10-13 02:29:47.775581] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.227 [2024-10-13 02:29:47.775590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.227 [2024-10-13 02:29:47.775596] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:29.227 [2024-10-13 02:29:47.775604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.227 [2024-10-13 02:29:47.796421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.227 BaseBdev1 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.227 [ 00:15:29.227 { 00:15:29.227 "name": "BaseBdev1", 00:15:29.227 "aliases": [ 00:15:29.227 "a46a5df6-e5bc-4b1b-9e38-e3e648f86485" 00:15:29.227 ], 00:15:29.227 "product_name": "Malloc disk", 00:15:29.227 "block_size": 512, 00:15:29.227 "num_blocks": 65536, 00:15:29.227 "uuid": "a46a5df6-e5bc-4b1b-9e38-e3e648f86485", 00:15:29.227 "assigned_rate_limits": { 00:15:29.227 "rw_ios_per_sec": 0, 00:15:29.227 "rw_mbytes_per_sec": 0, 00:15:29.227 "r_mbytes_per_sec": 0, 00:15:29.227 "w_mbytes_per_sec": 0 00:15:29.227 }, 00:15:29.227 "claimed": true, 00:15:29.227 "claim_type": "exclusive_write", 00:15:29.227 "zoned": false, 00:15:29.227 "supported_io_types": { 00:15:29.227 "read": true, 00:15:29.227 "write": true, 00:15:29.227 "unmap": true, 00:15:29.227 "flush": true, 00:15:29.227 "reset": true, 00:15:29.227 "nvme_admin": false, 00:15:29.227 "nvme_io": false, 00:15:29.227 "nvme_io_md": false, 00:15:29.227 "write_zeroes": true, 00:15:29.227 "zcopy": true, 00:15:29.227 "get_zone_info": false, 00:15:29.227 "zone_management": false, 00:15:29.227 "zone_append": false, 00:15:29.227 "compare": false, 00:15:29.227 "compare_and_write": false, 00:15:29.227 "abort": true, 00:15:29.227 "seek_hole": false, 00:15:29.227 "seek_data": false, 00:15:29.227 "copy": true, 00:15:29.227 "nvme_iov_md": false 00:15:29.227 }, 00:15:29.227 "memory_domains": [ 00:15:29.227 { 00:15:29.227 "dma_device_id": "system", 00:15:29.227 "dma_device_type": 1 00:15:29.227 }, 00:15:29.227 { 00:15:29.227 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:29.227 "dma_device_type": 2 00:15:29.227 } 00:15:29.227 ], 00:15:29.227 "driver_specific": {} 00:15:29.227 } 00:15:29.227 ] 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.227 02:29:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.227 "name": "Existed_Raid", 00:15:29.227 "uuid": "03a57a73-421e-40bb-a481-22cd0c03d64a", 00:15:29.227 "strip_size_kb": 64, 00:15:29.227 "state": "configuring", 00:15:29.227 "raid_level": "raid5f", 00:15:29.227 "superblock": true, 00:15:29.227 "num_base_bdevs": 4, 00:15:29.227 "num_base_bdevs_discovered": 1, 00:15:29.227 "num_base_bdevs_operational": 4, 00:15:29.227 "base_bdevs_list": [ 00:15:29.227 { 00:15:29.227 "name": "BaseBdev1", 00:15:29.227 "uuid": "a46a5df6-e5bc-4b1b-9e38-e3e648f86485", 00:15:29.227 "is_configured": true, 00:15:29.227 "data_offset": 2048, 00:15:29.227 "data_size": 63488 00:15:29.227 }, 00:15:29.227 { 00:15:29.227 "name": "BaseBdev2", 00:15:29.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.227 "is_configured": false, 00:15:29.227 "data_offset": 0, 00:15:29.227 "data_size": 0 00:15:29.227 }, 00:15:29.227 { 00:15:29.227 "name": "BaseBdev3", 00:15:29.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.227 "is_configured": false, 00:15:29.227 "data_offset": 0, 00:15:29.227 "data_size": 0 00:15:29.227 }, 00:15:29.227 { 00:15:29.227 "name": "BaseBdev4", 00:15:29.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.227 "is_configured": false, 00:15:29.227 "data_offset": 0, 00:15:29.227 "data_size": 0 00:15:29.227 } 00:15:29.227 ] 00:15:29.227 }' 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.227 02:29:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:29.796 02:29:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.796 [2024-10-13 02:29:48.271695] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.796 [2024-10-13 02:29:48.271768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.796 [2024-10-13 02:29:48.283763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.796 [2024-10-13 02:29:48.285648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.796 [2024-10-13 02:29:48.285691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.796 [2024-10-13 02:29:48.285700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.796 [2024-10-13 02:29:48.285709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.796 [2024-10-13 02:29:48.285716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:29.796 [2024-10-13 02:29:48.285724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.796 02:29:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.796 "name": "Existed_Raid", 00:15:29.796 "uuid": "6398739a-6533-4d34-92e5-13315b48ccb2", 00:15:29.796 "strip_size_kb": 64, 00:15:29.796 "state": "configuring", 00:15:29.796 "raid_level": "raid5f", 00:15:29.796 "superblock": true, 00:15:29.796 "num_base_bdevs": 4, 00:15:29.796 "num_base_bdevs_discovered": 1, 00:15:29.796 "num_base_bdevs_operational": 4, 00:15:29.796 "base_bdevs_list": [ 00:15:29.796 { 00:15:29.796 "name": "BaseBdev1", 00:15:29.796 "uuid": "a46a5df6-e5bc-4b1b-9e38-e3e648f86485", 00:15:29.796 "is_configured": true, 00:15:29.796 "data_offset": 2048, 00:15:29.796 "data_size": 63488 00:15:29.796 }, 00:15:29.796 { 00:15:29.796 "name": "BaseBdev2", 00:15:29.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.796 "is_configured": false, 00:15:29.796 "data_offset": 0, 00:15:29.796 "data_size": 0 00:15:29.796 }, 00:15:29.796 { 00:15:29.796 "name": "BaseBdev3", 00:15:29.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.796 "is_configured": false, 00:15:29.796 "data_offset": 0, 00:15:29.796 "data_size": 0 00:15:29.796 }, 00:15:29.796 { 00:15:29.796 "name": "BaseBdev4", 00:15:29.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.796 "is_configured": false, 00:15:29.796 "data_offset": 0, 00:15:29.796 "data_size": 0 00:15:29.796 } 00:15:29.796 ] 00:15:29.796 }' 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.796 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.365 [2024-10-13 02:29:48.775549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.365 BaseBdev2 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.365 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.365 [ 00:15:30.365 { 00:15:30.365 "name": "BaseBdev2", 00:15:30.365 "aliases": [ 00:15:30.365 
"51be063e-92b0-435e-b9ff-97cdcef77c7b" 00:15:30.365 ], 00:15:30.365 "product_name": "Malloc disk", 00:15:30.365 "block_size": 512, 00:15:30.365 "num_blocks": 65536, 00:15:30.365 "uuid": "51be063e-92b0-435e-b9ff-97cdcef77c7b", 00:15:30.365 "assigned_rate_limits": { 00:15:30.365 "rw_ios_per_sec": 0, 00:15:30.365 "rw_mbytes_per_sec": 0, 00:15:30.365 "r_mbytes_per_sec": 0, 00:15:30.365 "w_mbytes_per_sec": 0 00:15:30.366 }, 00:15:30.366 "claimed": true, 00:15:30.366 "claim_type": "exclusive_write", 00:15:30.366 "zoned": false, 00:15:30.366 "supported_io_types": { 00:15:30.366 "read": true, 00:15:30.366 "write": true, 00:15:30.366 "unmap": true, 00:15:30.366 "flush": true, 00:15:30.366 "reset": true, 00:15:30.366 "nvme_admin": false, 00:15:30.366 "nvme_io": false, 00:15:30.366 "nvme_io_md": false, 00:15:30.366 "write_zeroes": true, 00:15:30.366 "zcopy": true, 00:15:30.366 "get_zone_info": false, 00:15:30.366 "zone_management": false, 00:15:30.366 "zone_append": false, 00:15:30.366 "compare": false, 00:15:30.366 "compare_and_write": false, 00:15:30.366 "abort": true, 00:15:30.366 "seek_hole": false, 00:15:30.366 "seek_data": false, 00:15:30.366 "copy": true, 00:15:30.366 "nvme_iov_md": false 00:15:30.366 }, 00:15:30.366 "memory_domains": [ 00:15:30.366 { 00:15:30.366 "dma_device_id": "system", 00:15:30.366 "dma_device_type": 1 00:15:30.366 }, 00:15:30.366 { 00:15:30.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.366 "dma_device_type": 2 00:15:30.366 } 00:15:30.366 ], 00:15:30.366 "driver_specific": {} 00:15:30.366 } 00:15:30.366 ] 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.366 "name": "Existed_Raid", 00:15:30.366 "uuid": 
"6398739a-6533-4d34-92e5-13315b48ccb2", 00:15:30.366 "strip_size_kb": 64, 00:15:30.366 "state": "configuring", 00:15:30.366 "raid_level": "raid5f", 00:15:30.366 "superblock": true, 00:15:30.366 "num_base_bdevs": 4, 00:15:30.366 "num_base_bdevs_discovered": 2, 00:15:30.366 "num_base_bdevs_operational": 4, 00:15:30.366 "base_bdevs_list": [ 00:15:30.366 { 00:15:30.366 "name": "BaseBdev1", 00:15:30.366 "uuid": "a46a5df6-e5bc-4b1b-9e38-e3e648f86485", 00:15:30.366 "is_configured": true, 00:15:30.366 "data_offset": 2048, 00:15:30.366 "data_size": 63488 00:15:30.366 }, 00:15:30.366 { 00:15:30.366 "name": "BaseBdev2", 00:15:30.366 "uuid": "51be063e-92b0-435e-b9ff-97cdcef77c7b", 00:15:30.366 "is_configured": true, 00:15:30.366 "data_offset": 2048, 00:15:30.366 "data_size": 63488 00:15:30.366 }, 00:15:30.366 { 00:15:30.366 "name": "BaseBdev3", 00:15:30.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.366 "is_configured": false, 00:15:30.366 "data_offset": 0, 00:15:30.366 "data_size": 0 00:15:30.366 }, 00:15:30.366 { 00:15:30.366 "name": "BaseBdev4", 00:15:30.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.366 "is_configured": false, 00:15:30.366 "data_offset": 0, 00:15:30.366 "data_size": 0 00:15:30.366 } 00:15:30.366 ] 00:15:30.366 }' 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.366 02:29:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.626 [2024-10-13 02:29:49.277776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.626 BaseBdev3 
00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.626 [ 00:15:30.626 { 00:15:30.626 "name": "BaseBdev3", 00:15:30.626 "aliases": [ 00:15:30.626 "2310032e-1318-4810-9283-b4a10bc9453f" 00:15:30.626 ], 00:15:30.626 "product_name": "Malloc disk", 00:15:30.626 "block_size": 512, 00:15:30.626 "num_blocks": 65536, 00:15:30.626 "uuid": "2310032e-1318-4810-9283-b4a10bc9453f", 00:15:30.626 
"assigned_rate_limits": { 00:15:30.626 "rw_ios_per_sec": 0, 00:15:30.626 "rw_mbytes_per_sec": 0, 00:15:30.626 "r_mbytes_per_sec": 0, 00:15:30.626 "w_mbytes_per_sec": 0 00:15:30.626 }, 00:15:30.626 "claimed": true, 00:15:30.626 "claim_type": "exclusive_write", 00:15:30.626 "zoned": false, 00:15:30.626 "supported_io_types": { 00:15:30.626 "read": true, 00:15:30.626 "write": true, 00:15:30.626 "unmap": true, 00:15:30.626 "flush": true, 00:15:30.626 "reset": true, 00:15:30.626 "nvme_admin": false, 00:15:30.626 "nvme_io": false, 00:15:30.626 "nvme_io_md": false, 00:15:30.626 "write_zeroes": true, 00:15:30.626 "zcopy": true, 00:15:30.626 "get_zone_info": false, 00:15:30.626 "zone_management": false, 00:15:30.626 "zone_append": false, 00:15:30.626 "compare": false, 00:15:30.626 "compare_and_write": false, 00:15:30.626 "abort": true, 00:15:30.626 "seek_hole": false, 00:15:30.626 "seek_data": false, 00:15:30.626 "copy": true, 00:15:30.626 "nvme_iov_md": false 00:15:30.626 }, 00:15:30.626 "memory_domains": [ 00:15:30.626 { 00:15:30.626 "dma_device_id": "system", 00:15:30.626 "dma_device_type": 1 00:15:30.626 }, 00:15:30.626 { 00:15:30.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.626 "dma_device_type": 2 00:15:30.626 } 00:15:30.626 ], 00:15:30.626 "driver_specific": {} 00:15:30.626 } 00:15:30.626 ] 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.626 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.627 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.627 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.627 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.627 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.627 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.627 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.887 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.887 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.887 "name": "Existed_Raid", 00:15:30.887 "uuid": "6398739a-6533-4d34-92e5-13315b48ccb2", 00:15:30.887 "strip_size_kb": 64, 00:15:30.887 "state": "configuring", 00:15:30.887 "raid_level": "raid5f", 00:15:30.887 "superblock": true, 00:15:30.887 "num_base_bdevs": 4, 00:15:30.887 "num_base_bdevs_discovered": 3, 
00:15:30.887 "num_base_bdevs_operational": 4, 00:15:30.887 "base_bdevs_list": [ 00:15:30.887 { 00:15:30.887 "name": "BaseBdev1", 00:15:30.887 "uuid": "a46a5df6-e5bc-4b1b-9e38-e3e648f86485", 00:15:30.887 "is_configured": true, 00:15:30.887 "data_offset": 2048, 00:15:30.887 "data_size": 63488 00:15:30.887 }, 00:15:30.887 { 00:15:30.887 "name": "BaseBdev2", 00:15:30.887 "uuid": "51be063e-92b0-435e-b9ff-97cdcef77c7b", 00:15:30.887 "is_configured": true, 00:15:30.887 "data_offset": 2048, 00:15:30.887 "data_size": 63488 00:15:30.887 }, 00:15:30.887 { 00:15:30.887 "name": "BaseBdev3", 00:15:30.887 "uuid": "2310032e-1318-4810-9283-b4a10bc9453f", 00:15:30.887 "is_configured": true, 00:15:30.887 "data_offset": 2048, 00:15:30.887 "data_size": 63488 00:15:30.887 }, 00:15:30.887 { 00:15:30.887 "name": "BaseBdev4", 00:15:30.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.887 "is_configured": false, 00:15:30.887 "data_offset": 0, 00:15:30.887 "data_size": 0 00:15:30.887 } 00:15:30.887 ] 00:15:30.887 }' 00:15:30.887 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.887 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.147 [2024-10-13 02:29:49.776077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:31.147 [2024-10-13 02:29:49.776395] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:31.147 [2024-10-13 02:29:49.776448] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:31.147 [2024-10-13 
02:29:49.776738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:31.147 BaseBdev4 00:15:31.147 [2024-10-13 02:29:49.777255] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:31.147 [2024-10-13 02:29:49.777328] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.147 [2024-10-13 02:29:49.777492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.147 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:31.148 02:29:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.148 [ 00:15:31.148 { 00:15:31.148 "name": "BaseBdev4", 00:15:31.148 "aliases": [ 00:15:31.148 "f2c14e5a-9cef-449d-abc3-f9c8028f51aa" 00:15:31.148 ], 00:15:31.148 "product_name": "Malloc disk", 00:15:31.148 "block_size": 512, 00:15:31.148 "num_blocks": 65536, 00:15:31.148 "uuid": "f2c14e5a-9cef-449d-abc3-f9c8028f51aa", 00:15:31.148 "assigned_rate_limits": { 00:15:31.148 "rw_ios_per_sec": 0, 00:15:31.148 "rw_mbytes_per_sec": 0, 00:15:31.148 "r_mbytes_per_sec": 0, 00:15:31.148 "w_mbytes_per_sec": 0 00:15:31.148 }, 00:15:31.148 "claimed": true, 00:15:31.148 "claim_type": "exclusive_write", 00:15:31.148 "zoned": false, 00:15:31.148 "supported_io_types": { 00:15:31.148 "read": true, 00:15:31.148 "write": true, 00:15:31.148 "unmap": true, 00:15:31.148 "flush": true, 00:15:31.148 "reset": true, 00:15:31.148 "nvme_admin": false, 00:15:31.148 "nvme_io": false, 00:15:31.148 "nvme_io_md": false, 00:15:31.148 "write_zeroes": true, 00:15:31.148 "zcopy": true, 00:15:31.148 "get_zone_info": false, 00:15:31.148 "zone_management": false, 00:15:31.148 "zone_append": false, 00:15:31.148 "compare": false, 00:15:31.148 "compare_and_write": false, 00:15:31.148 "abort": true, 00:15:31.148 "seek_hole": false, 00:15:31.148 "seek_data": false, 00:15:31.148 "copy": true, 00:15:31.148 "nvme_iov_md": false 00:15:31.148 }, 00:15:31.148 "memory_domains": [ 00:15:31.148 { 00:15:31.148 "dma_device_id": "system", 00:15:31.148 "dma_device_type": 1 00:15:31.148 }, 00:15:31.148 { 00:15:31.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.148 "dma_device_type": 2 00:15:31.148 } 00:15:31.148 ], 00:15:31.148 "driver_specific": {} 00:15:31.148 } 00:15:31.148 ] 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.148 02:29:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.148 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:31.408 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.408 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.408 "name": "Existed_Raid", 00:15:31.408 "uuid": "6398739a-6533-4d34-92e5-13315b48ccb2", 00:15:31.408 "strip_size_kb": 64, 00:15:31.408 "state": "online", 00:15:31.408 "raid_level": "raid5f", 00:15:31.408 "superblock": true, 00:15:31.408 "num_base_bdevs": 4, 00:15:31.408 "num_base_bdevs_discovered": 4, 00:15:31.408 "num_base_bdevs_operational": 4, 00:15:31.408 "base_bdevs_list": [ 00:15:31.408 { 00:15:31.408 "name": "BaseBdev1", 00:15:31.408 "uuid": "a46a5df6-e5bc-4b1b-9e38-e3e648f86485", 00:15:31.408 "is_configured": true, 00:15:31.408 "data_offset": 2048, 00:15:31.408 "data_size": 63488 00:15:31.408 }, 00:15:31.408 { 00:15:31.408 "name": "BaseBdev2", 00:15:31.408 "uuid": "51be063e-92b0-435e-b9ff-97cdcef77c7b", 00:15:31.408 "is_configured": true, 00:15:31.408 "data_offset": 2048, 00:15:31.408 "data_size": 63488 00:15:31.408 }, 00:15:31.408 { 00:15:31.408 "name": "BaseBdev3", 00:15:31.408 "uuid": "2310032e-1318-4810-9283-b4a10bc9453f", 00:15:31.408 "is_configured": true, 00:15:31.408 "data_offset": 2048, 00:15:31.408 "data_size": 63488 00:15:31.408 }, 00:15:31.408 { 00:15:31.408 "name": "BaseBdev4", 00:15:31.408 "uuid": "f2c14e5a-9cef-449d-abc3-f9c8028f51aa", 00:15:31.408 "is_configured": true, 00:15:31.408 "data_offset": 2048, 00:15:31.408 "data_size": 63488 00:15:31.408 } 00:15:31.408 ] 00:15:31.408 }' 00:15:31.408 02:29:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.408 02:29:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.669 [2024-10-13 02:29:50.247641] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.669 "name": "Existed_Raid", 00:15:31.669 "aliases": [ 00:15:31.669 "6398739a-6533-4d34-92e5-13315b48ccb2" 00:15:31.669 ], 00:15:31.669 "product_name": "Raid Volume", 00:15:31.669 "block_size": 512, 00:15:31.669 "num_blocks": 190464, 00:15:31.669 "uuid": "6398739a-6533-4d34-92e5-13315b48ccb2", 00:15:31.669 "assigned_rate_limits": { 00:15:31.669 "rw_ios_per_sec": 0, 00:15:31.669 "rw_mbytes_per_sec": 0, 00:15:31.669 "r_mbytes_per_sec": 0, 00:15:31.669 "w_mbytes_per_sec": 0 00:15:31.669 }, 00:15:31.669 "claimed": false, 00:15:31.669 "zoned": false, 00:15:31.669 "supported_io_types": { 00:15:31.669 "read": true, 00:15:31.669 "write": true, 00:15:31.669 "unmap": false, 00:15:31.669 "flush": false, 
00:15:31.669 "reset": true, 00:15:31.669 "nvme_admin": false, 00:15:31.669 "nvme_io": false, 00:15:31.669 "nvme_io_md": false, 00:15:31.669 "write_zeroes": true, 00:15:31.669 "zcopy": false, 00:15:31.669 "get_zone_info": false, 00:15:31.669 "zone_management": false, 00:15:31.669 "zone_append": false, 00:15:31.669 "compare": false, 00:15:31.669 "compare_and_write": false, 00:15:31.669 "abort": false, 00:15:31.669 "seek_hole": false, 00:15:31.669 "seek_data": false, 00:15:31.669 "copy": false, 00:15:31.669 "nvme_iov_md": false 00:15:31.669 }, 00:15:31.669 "driver_specific": { 00:15:31.669 "raid": { 00:15:31.669 "uuid": "6398739a-6533-4d34-92e5-13315b48ccb2", 00:15:31.669 "strip_size_kb": 64, 00:15:31.669 "state": "online", 00:15:31.669 "raid_level": "raid5f", 00:15:31.669 "superblock": true, 00:15:31.669 "num_base_bdevs": 4, 00:15:31.669 "num_base_bdevs_discovered": 4, 00:15:31.669 "num_base_bdevs_operational": 4, 00:15:31.669 "base_bdevs_list": [ 00:15:31.669 { 00:15:31.669 "name": "BaseBdev1", 00:15:31.669 "uuid": "a46a5df6-e5bc-4b1b-9e38-e3e648f86485", 00:15:31.669 "is_configured": true, 00:15:31.669 "data_offset": 2048, 00:15:31.669 "data_size": 63488 00:15:31.669 }, 00:15:31.669 { 00:15:31.669 "name": "BaseBdev2", 00:15:31.669 "uuid": "51be063e-92b0-435e-b9ff-97cdcef77c7b", 00:15:31.669 "is_configured": true, 00:15:31.669 "data_offset": 2048, 00:15:31.669 "data_size": 63488 00:15:31.669 }, 00:15:31.669 { 00:15:31.669 "name": "BaseBdev3", 00:15:31.669 "uuid": "2310032e-1318-4810-9283-b4a10bc9453f", 00:15:31.669 "is_configured": true, 00:15:31.669 "data_offset": 2048, 00:15:31.669 "data_size": 63488 00:15:31.669 }, 00:15:31.669 { 00:15:31.669 "name": "BaseBdev4", 00:15:31.669 "uuid": "f2c14e5a-9cef-449d-abc3-f9c8028f51aa", 00:15:31.669 "is_configured": true, 00:15:31.669 "data_offset": 2048, 00:15:31.669 "data_size": 63488 00:15:31.669 } 00:15:31.669 ] 00:15:31.669 } 00:15:31.669 } 00:15:31.669 }' 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:31.669 BaseBdev2 00:15:31.669 BaseBdev3 00:15:31.669 BaseBdev4' 00:15:31.669 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.930 [2024-10-13 02:29:50.566994] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.930 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.191 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.191 "name": "Existed_Raid", 00:15:32.191 "uuid": "6398739a-6533-4d34-92e5-13315b48ccb2", 00:15:32.191 "strip_size_kb": 64, 00:15:32.191 "state": "online", 00:15:32.191 "raid_level": "raid5f", 00:15:32.191 "superblock": true, 00:15:32.191 "num_base_bdevs": 4, 00:15:32.191 "num_base_bdevs_discovered": 3, 00:15:32.191 "num_base_bdevs_operational": 3, 00:15:32.191 "base_bdevs_list": [ 00:15:32.191 { 00:15:32.191 "name": null, 00:15:32.191 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:32.191 "is_configured": false, 00:15:32.191 "data_offset": 0, 00:15:32.191 "data_size": 63488 00:15:32.191 }, 00:15:32.191 { 00:15:32.191 "name": "BaseBdev2", 00:15:32.191 "uuid": "51be063e-92b0-435e-b9ff-97cdcef77c7b", 00:15:32.191 "is_configured": true, 00:15:32.191 "data_offset": 2048, 00:15:32.191 "data_size": 63488 00:15:32.191 }, 00:15:32.191 { 00:15:32.191 "name": "BaseBdev3", 00:15:32.191 "uuid": "2310032e-1318-4810-9283-b4a10bc9453f", 00:15:32.191 "is_configured": true, 00:15:32.191 "data_offset": 2048, 00:15:32.191 "data_size": 63488 00:15:32.191 }, 00:15:32.191 { 00:15:32.191 "name": "BaseBdev4", 00:15:32.191 "uuid": "f2c14e5a-9cef-449d-abc3-f9c8028f51aa", 00:15:32.191 "is_configured": true, 00:15:32.191 "data_offset": 2048, 00:15:32.191 "data_size": 63488 00:15:32.191 } 00:15:32.191 ] 00:15:32.191 }' 00:15:32.191 02:29:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.191 02:29:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.451 [2024-10-13 02:29:51.089556] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:32.451 [2024-10-13 02:29:51.089807] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:32.451 [2024-10-13 02:29:51.100882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:32.451 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.712 [2024-10-13 02:29:51.160842] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.712 [2024-10-13 02:29:51.232070] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:15:32.712 [2024-10-13 02:29:51.232130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.712 BaseBdev2
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.712 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.712 [
00:15:32.712 {
00:15:32.712 "name": "BaseBdev2",
00:15:32.712 "aliases": [
00:15:32.712 "68bf58e3-3785-4a7d-b30e-d36e69b89b48"
00:15:32.712 ],
00:15:32.712 "product_name": "Malloc disk",
00:15:32.712 "block_size": 512,
00:15:32.712 "num_blocks": 65536,
00:15:32.712 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48",
00:15:32.712 "assigned_rate_limits": {
00:15:32.712 "rw_ios_per_sec": 0,
00:15:32.712 "rw_mbytes_per_sec": 0,
00:15:32.712 "r_mbytes_per_sec": 0,
00:15:32.712 "w_mbytes_per_sec": 0
00:15:32.712 },
00:15:32.712 "claimed": false,
00:15:32.712 "zoned": false,
00:15:32.712 "supported_io_types": {
00:15:32.712 "read": true,
00:15:32.712 "write": true,
00:15:32.712 "unmap": true,
00:15:32.712 "flush": true,
00:15:32.712 "reset": true,
00:15:32.712 "nvme_admin": false,
00:15:32.712 "nvme_io": false,
00:15:32.712 "nvme_io_md": false,
00:15:32.712 "write_zeroes": true,
00:15:32.712 "zcopy": true,
00:15:32.712 "get_zone_info": false,
00:15:32.712 "zone_management": false,
00:15:32.712 "zone_append": false,
00:15:32.712 "compare": false,
00:15:32.712 "compare_and_write": false,
00:15:32.712 "abort": true,
00:15:32.712 "seek_hole": false,
00:15:32.712 "seek_data": false,
00:15:32.712 "copy": true,
00:15:32.712 "nvme_iov_md": false
00:15:32.712 },
00:15:32.712 "memory_domains": [
00:15:32.712 {
00:15:32.712 "dma_device_id": "system",
00:15:32.712 "dma_device_type": 1
00:15:32.712 },
00:15:32.712 {
00:15:32.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:32.712 "dma_device_type": 2
00:15:32.712 }
00:15:32.712 ],
00:15:32.712 "driver_specific": {}
00:15:32.712 }
00:15:32.712 ]
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.713 BaseBdev3
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.713 [
00:15:32.713 {
00:15:32.713 "name": "BaseBdev3",
00:15:32.713 "aliases": [
00:15:32.713 "b25e86c3-b965-405c-afc0-fa83cae9588e"
00:15:32.713 ],
00:15:32.713 "product_name": "Malloc disk",
00:15:32.713 "block_size": 512,
00:15:32.713 "num_blocks": 65536,
00:15:32.713 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e",
00:15:32.713 "assigned_rate_limits": {
00:15:32.713 "rw_ios_per_sec": 0,
00:15:32.713 "rw_mbytes_per_sec": 0,
00:15:32.713 "r_mbytes_per_sec": 0,
00:15:32.713 "w_mbytes_per_sec": 0
00:15:32.713 },
00:15:32.713 "claimed": false,
00:15:32.713 "zoned": false,
00:15:32.713 "supported_io_types": {
00:15:32.713 "read": true,
00:15:32.713 "write": true,
00:15:32.713 "unmap": true,
00:15:32.713 "flush": true,
00:15:32.713 "reset": true,
00:15:32.713 "nvme_admin": false,
00:15:32.713 "nvme_io": false,
00:15:32.713 "nvme_io_md": false,
00:15:32.713 "write_zeroes": true,
00:15:32.713 "zcopy": true,
00:15:32.713 "get_zone_info": false,
00:15:32.713 "zone_management": false,
00:15:32.713 "zone_append": false,
00:15:32.713 "compare": false,
00:15:32.713 "compare_and_write": false,
00:15:32.713 "abort": true,
00:15:32.713 "seek_hole": false,
00:15:32.713 "seek_data": false,
00:15:32.713 "copy": true,
00:15:32.713 "nvme_iov_md": false
00:15:32.713 },
00:15:32.713 "memory_domains": [
00:15:32.713 {
00:15:32.713 "dma_device_id": "system",
00:15:32.713 "dma_device_type": 1
00:15:32.713 },
00:15:32.713 {
00:15:32.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:32.713 "dma_device_type": 2
00:15:32.713 }
00:15:32.713 ],
00:15:32.713 "driver_specific": {}
00:15:32.713 }
00:15:32.713 ]
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.713 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.973 BaseBdev4
00:15:32.973 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.973 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:15:32.973 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:15:32.973 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:32.973 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:32.973 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:32.973 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:32.973 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:32.973 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.974 [
00:15:32.974 {
00:15:32.974 "name": "BaseBdev4",
00:15:32.974 "aliases": [
00:15:32.974 "7e3bd6e8-cbf2-4886-8964-2756833388aa"
00:15:32.974 ],
00:15:32.974 "product_name": "Malloc disk",
00:15:32.974 "block_size": 512,
00:15:32.974 "num_blocks": 65536,
00:15:32.974 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa",
00:15:32.974 "assigned_rate_limits": {
00:15:32.974 "rw_ios_per_sec": 0,
00:15:32.974 "rw_mbytes_per_sec": 0,
00:15:32.974 "r_mbytes_per_sec": 0,
00:15:32.974 "w_mbytes_per_sec": 0
00:15:32.974 },
00:15:32.974 "claimed": false,
00:15:32.974 "zoned": false,
00:15:32.974 "supported_io_types": {
00:15:32.974 "read": true,
00:15:32.974 "write": true,
00:15:32.974 "unmap": true,
00:15:32.974 "flush": true,
00:15:32.974 "reset": true,
00:15:32.974 "nvme_admin": false,
00:15:32.974 "nvme_io": false,
00:15:32.974 "nvme_io_md": false,
00:15:32.974 "write_zeroes": true,
00:15:32.974 "zcopy": true,
00:15:32.974 "get_zone_info": false,
00:15:32.974 "zone_management": false,
00:15:32.974 "zone_append": false,
00:15:32.974 "compare": false,
00:15:32.974 "compare_and_write": false,
00:15:32.974 "abort": true,
00:15:32.974 "seek_hole": false,
00:15:32.974 "seek_data": false,
00:15:32.974 "copy": true,
00:15:32.974 "nvme_iov_md": false
00:15:32.974 },
00:15:32.974 "memory_domains": [
00:15:32.974 {
00:15:32.974 "dma_device_id": "system",
00:15:32.974 "dma_device_type": 1
00:15:32.974 },
00:15:32.974 {
00:15:32.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:32.974 "dma_device_type": 2
00:15:32.974 }
00:15:32.974 ],
00:15:32.974 "driver_specific": {}
00:15:32.974 }
00:15:32.974 ]
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.974 [2024-10-13 02:29:51.437424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-10-13 02:29:51.437567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-10-13 02:29:51.437610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-10-13 02:29:51.439453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-10-13 02:29:51.439549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:32.974 "name": "Existed_Raid",
00:15:32.974 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c",
00:15:32.974 "strip_size_kb": 64,
00:15:32.974 "state": "configuring",
00:15:32.974 "raid_level": "raid5f",
00:15:32.974 "superblock": true,
00:15:32.974 "num_base_bdevs": 4,
00:15:32.974 "num_base_bdevs_discovered": 3,
00:15:32.974 "num_base_bdevs_operational": 4,
00:15:32.974 "base_bdevs_list": [
00:15:32.974 {
00:15:32.974 "name": "BaseBdev1",
00:15:32.974 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:32.974 "is_configured": false,
00:15:32.974 "data_offset": 0,
00:15:32.974 "data_size": 0
00:15:32.974 },
00:15:32.974 {
00:15:32.974 "name": "BaseBdev2",
00:15:32.974 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48",
00:15:32.974 "is_configured": true,
00:15:32.974 "data_offset": 2048,
00:15:32.974 "data_size": 63488
00:15:32.974 },
00:15:32.974 {
00:15:32.974 "name": "BaseBdev3",
00:15:32.974 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e",
00:15:32.974 "is_configured": true,
00:15:32.974 "data_offset": 2048,
00:15:32.974 "data_size": 63488
00:15:32.974 },
00:15:32.974 {
00:15:32.974 "name": "BaseBdev4",
00:15:32.974 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa",
00:15:32.974 "is_configured": true,
00:15:32.974 "data_offset": 2048,
00:15:32.974 "data_size": 63488
00:15:32.974 }
00:15:32.974 ]
00:15:32.974 }'
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:32.974 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.234 [2024-10-13 02:29:51.860721] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:33.234 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:33.235 "name": "Existed_Raid",
00:15:33.235 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c",
00:15:33.235 "strip_size_kb": 64,
00:15:33.235 "state": "configuring",
00:15:33.235 "raid_level": "raid5f",
00:15:33.235 "superblock": true,
00:15:33.235 "num_base_bdevs": 4,
00:15:33.235 "num_base_bdevs_discovered": 2,
00:15:33.235 "num_base_bdevs_operational": 4,
00:15:33.235 "base_bdevs_list": [
00:15:33.235 {
00:15:33.235 "name": "BaseBdev1",
00:15:33.235 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:33.235 "is_configured": false,
00:15:33.235 "data_offset": 0,
00:15:33.235 "data_size": 0
00:15:33.235 },
00:15:33.235 {
00:15:33.235 "name": null,
00:15:33.235 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48",
00:15:33.235 "is_configured": false,
00:15:33.235 "data_offset": 0,
00:15:33.235 "data_size": 63488
00:15:33.235 },
00:15:33.235 {
00:15:33.235 "name": "BaseBdev3",
00:15:33.235 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e",
00:15:33.235 "is_configured": true,
00:15:33.235 "data_offset": 2048,
00:15:33.235 "data_size": 63488
00:15:33.235 },
00:15:33.235 {
00:15:33.235 "name": "BaseBdev4",
00:15:33.235 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa",
00:15:33.235 "is_configured": true,
00:15:33.235 "data_offset": 2048,
00:15:33.235 "data_size": 63488
00:15:33.235 }
00:15:33.235 ]
00:15:33.235 }'
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:33.235 02:29:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.805 [2024-10-13 02:29:52.366783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:33.805 BaseBdev1
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:33.805 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.806 [
00:15:33.806 {
00:15:33.806 "name": "BaseBdev1",
00:15:33.806 "aliases": [
00:15:33.806 "26246179-accc-4627-a8f3-a8a9fb35da3b"
00:15:33.806 ],
00:15:33.806 "product_name": "Malloc disk",
00:15:33.806 "block_size": 512,
00:15:33.806 "num_blocks": 65536,
00:15:33.806 "uuid": "26246179-accc-4627-a8f3-a8a9fb35da3b",
00:15:33.806 "assigned_rate_limits": {
00:15:33.806 "rw_ios_per_sec": 0,
00:15:33.806 "rw_mbytes_per_sec": 0,
00:15:33.806 "r_mbytes_per_sec": 0,
00:15:33.806 "w_mbytes_per_sec": 0
00:15:33.806 },
00:15:33.806 "claimed": true,
00:15:33.806 "claim_type": "exclusive_write",
00:15:33.806 "zoned": false,
00:15:33.806 "supported_io_types": {
00:15:33.806 "read": true,
00:15:33.806 "write": true,
00:15:33.806 "unmap": true,
00:15:33.806 "flush": true,
00:15:33.806 "reset": true,
00:15:33.806 "nvme_admin": false,
00:15:33.806 "nvme_io": false,
00:15:33.806 "nvme_io_md": false,
00:15:33.806 "write_zeroes": true,
00:15:33.806 "zcopy": true,
00:15:33.806 "get_zone_info": false,
00:15:33.806 "zone_management": false,
00:15:33.806 "zone_append": false,
00:15:33.806 "compare": false,
00:15:33.806 "compare_and_write": false,
00:15:33.806 "abort": true,
00:15:33.806 "seek_hole": false,
00:15:33.806 "seek_data": false,
00:15:33.806 "copy": true,
00:15:33.806 "nvme_iov_md": false
00:15:33.806 },
00:15:33.806 "memory_domains": [
00:15:33.806 {
00:15:33.806 "dma_device_id": "system",
00:15:33.806 "dma_device_type": 1
00:15:33.806 },
00:15:33.806 {
00:15:33.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:33.806 "dma_device_type": 2
00:15:33.806 }
00:15:33.806 ],
00:15:33.806 "driver_specific": {}
00:15:33.806 }
00:15:33.806 ]
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:33.806 "name": "Existed_Raid",
00:15:33.806 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c",
00:15:33.806 "strip_size_kb": 64,
00:15:33.806 "state": "configuring",
00:15:33.806 "raid_level": "raid5f",
00:15:33.806 "superblock": true,
00:15:33.806 "num_base_bdevs": 4,
00:15:33.806 "num_base_bdevs_discovered": 3,
00:15:33.806 "num_base_bdevs_operational": 4,
00:15:33.806 "base_bdevs_list": [
00:15:33.806 {
00:15:33.806 "name": "BaseBdev1",
00:15:33.806 "uuid": "26246179-accc-4627-a8f3-a8a9fb35da3b",
00:15:33.806 "is_configured": true,
00:15:33.806 "data_offset": 2048,
00:15:33.806 "data_size": 63488
00:15:33.806 },
00:15:33.806 {
00:15:33.806 "name": null,
00:15:33.806 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48",
00:15:33.806 "is_configured": false,
00:15:33.806 "data_offset": 0,
00:15:33.806 "data_size": 63488
00:15:33.806 },
00:15:33.806 {
00:15:33.806 "name": "BaseBdev3",
00:15:33.806 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e",
00:15:33.806 "is_configured": true,
00:15:33.806 "data_offset": 2048,
00:15:33.806 "data_size": 63488
00:15:33.806 },
00:15:33.806 {
00:15:33.806 "name": "BaseBdev4",
00:15:33.806 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa",
00:15:33.806 "is_configured": true,
00:15:33.806 "data_offset": 2048,
00:15:33.806 "data_size": 63488
00:15:33.806 }
00:15:33.806 ]
00:15:33.806 }'
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:33.806 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:34.376 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:34.376 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:34.376 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:34.376 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:34.376 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:34.376 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:34.376 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:34.376 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:34.376 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:34.377 [2024-10-13 02:29:52.842051] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:34.377 "name": "Existed_Raid",
00:15:34.377 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c",
00:15:34.377 "strip_size_kb": 64,
00:15:34.377 "state": "configuring",
00:15:34.377 "raid_level": "raid5f",
00:15:34.377 "superblock": true,
00:15:34.377 "num_base_bdevs": 4,
00:15:34.377 "num_base_bdevs_discovered": 2,
00:15:34.377 "num_base_bdevs_operational": 4,
00:15:34.377 "base_bdevs_list": [
00:15:34.377 {
00:15:34.377 "name": "BaseBdev1",
00:15:34.377 "uuid": "26246179-accc-4627-a8f3-a8a9fb35da3b",
00:15:34.377 "is_configured": true,
00:15:34.377 "data_offset": 2048,
00:15:34.377 "data_size": 63488
00:15:34.377 },
00:15:34.377 {
00:15:34.377 "name": null,
00:15:34.377 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48",
00:15:34.377 "is_configured": false,
00:15:34.377 "data_offset": 0,
00:15:34.377 "data_size": 63488
00:15:34.377 },
00:15:34.377 {
00:15:34.377 "name": null,
00:15:34.377 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e",
00:15:34.377 "is_configured": false,
00:15:34.377 "data_offset": 0,
00:15:34.377 "data_size": 63488
00:15:34.377 },
00:15:34.377 {
00:15:34.377 "name": "BaseBdev4",
00:15:34.377 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa",
00:15:34.377 "is_configured": true,
00:15:34.377 "data_offset": 2048,
00:15:34.377 "data_size": 63488
00:15:34.377 }
00:15:34.377 ]
00:15:34.377 }'
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:34.377 02:29:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:34.637 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:34.637 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:34.637 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set
+x 00:15:34.637 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:34.637 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.897 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:34.897 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:34.897 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.897 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.897 [2024-10-13 02:29:53.333263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.897 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.897 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.897 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.897 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.898 02:29:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.898 "name": "Existed_Raid", 00:15:34.898 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c", 00:15:34.898 "strip_size_kb": 64, 00:15:34.898 "state": "configuring", 00:15:34.898 "raid_level": "raid5f", 00:15:34.898 "superblock": true, 00:15:34.898 "num_base_bdevs": 4, 00:15:34.898 "num_base_bdevs_discovered": 3, 00:15:34.898 "num_base_bdevs_operational": 4, 00:15:34.898 "base_bdevs_list": [ 00:15:34.898 { 00:15:34.898 "name": "BaseBdev1", 00:15:34.898 "uuid": "26246179-accc-4627-a8f3-a8a9fb35da3b", 00:15:34.898 "is_configured": true, 00:15:34.898 "data_offset": 2048, 00:15:34.898 "data_size": 63488 00:15:34.898 }, 00:15:34.898 { 00:15:34.898 "name": null, 00:15:34.898 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48", 00:15:34.898 "is_configured": false, 00:15:34.898 "data_offset": 0, 00:15:34.898 "data_size": 63488 00:15:34.898 }, 00:15:34.898 { 00:15:34.898 "name": "BaseBdev3", 00:15:34.898 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e", 00:15:34.898 "is_configured": true, 00:15:34.898 "data_offset": 2048, 00:15:34.898 "data_size": 63488 00:15:34.898 }, 00:15:34.898 { 
00:15:34.898 "name": "BaseBdev4", 00:15:34.898 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa", 00:15:34.898 "is_configured": true, 00:15:34.898 "data_offset": 2048, 00:15:34.898 "data_size": 63488 00:15:34.898 } 00:15:34.898 ] 00:15:34.898 }' 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.898 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.158 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.158 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:35.158 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.158 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.158 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.418 [2024-10-13 02:29:53.852536] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.418 "name": "Existed_Raid", 00:15:35.418 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c", 00:15:35.418 "strip_size_kb": 64, 00:15:35.418 "state": "configuring", 00:15:35.418 "raid_level": "raid5f", 00:15:35.418 "superblock": true, 00:15:35.418 "num_base_bdevs": 4, 00:15:35.418 "num_base_bdevs_discovered": 2, 00:15:35.418 
"num_base_bdevs_operational": 4, 00:15:35.418 "base_bdevs_list": [ 00:15:35.418 { 00:15:35.418 "name": null, 00:15:35.418 "uuid": "26246179-accc-4627-a8f3-a8a9fb35da3b", 00:15:35.418 "is_configured": false, 00:15:35.418 "data_offset": 0, 00:15:35.418 "data_size": 63488 00:15:35.418 }, 00:15:35.418 { 00:15:35.418 "name": null, 00:15:35.418 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48", 00:15:35.418 "is_configured": false, 00:15:35.418 "data_offset": 0, 00:15:35.418 "data_size": 63488 00:15:35.418 }, 00:15:35.418 { 00:15:35.418 "name": "BaseBdev3", 00:15:35.418 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e", 00:15:35.418 "is_configured": true, 00:15:35.418 "data_offset": 2048, 00:15:35.418 "data_size": 63488 00:15:35.418 }, 00:15:35.418 { 00:15:35.418 "name": "BaseBdev4", 00:15:35.418 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa", 00:15:35.418 "is_configured": true, 00:15:35.418 "data_offset": 2048, 00:15:35.418 "data_size": 63488 00:15:35.418 } 00:15:35.418 ] 00:15:35.418 }' 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.418 02:29:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.679 [2024-10-13 02:29:54.350168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:35.679 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.939 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.939 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.939 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.939 "name": "Existed_Raid", 00:15:35.939 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c", 00:15:35.939 "strip_size_kb": 64, 00:15:35.939 "state": "configuring", 00:15:35.939 "raid_level": "raid5f", 00:15:35.939 "superblock": true, 00:15:35.939 "num_base_bdevs": 4, 00:15:35.939 "num_base_bdevs_discovered": 3, 00:15:35.939 "num_base_bdevs_operational": 4, 00:15:35.939 "base_bdevs_list": [ 00:15:35.939 { 00:15:35.939 "name": null, 00:15:35.939 "uuid": "26246179-accc-4627-a8f3-a8a9fb35da3b", 00:15:35.939 "is_configured": false, 00:15:35.939 "data_offset": 0, 00:15:35.939 "data_size": 63488 00:15:35.939 }, 00:15:35.939 { 00:15:35.939 "name": "BaseBdev2", 00:15:35.939 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48", 00:15:35.939 "is_configured": true, 00:15:35.939 "data_offset": 2048, 00:15:35.939 "data_size": 63488 00:15:35.939 }, 00:15:35.939 { 00:15:35.939 "name": "BaseBdev3", 00:15:35.939 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e", 00:15:35.939 "is_configured": true, 00:15:35.939 "data_offset": 2048, 00:15:35.939 "data_size": 63488 00:15:35.939 }, 00:15:35.939 { 00:15:35.939 "name": "BaseBdev4", 00:15:35.939 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa", 00:15:35.939 "is_configured": true, 00:15:35.939 "data_offset": 2048, 00:15:35.939 "data_size": 63488 00:15:35.939 } 00:15:35.939 ] 00:15:35.939 }' 00:15:35.939 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.939 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:36.199 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 26246179-accc-4627-a8f3-a8a9fb35da3b 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.460 [2024-10-13 02:29:54.900152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:36.460 [2024-10-13 02:29:54.900353] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:36.460 [2024-10-13 
02:29:54.900365] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:36.460 [2024-10-13 02:29:54.900621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:15:36.460 NewBaseBdev 00:15:36.460 [2024-10-13 02:29:54.901103] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:36.460 [2024-10-13 02:29:54.901126] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:15:36.460 [2024-10-13 02:29:54.901225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.460 [ 00:15:36.460 { 00:15:36.460 "name": "NewBaseBdev", 00:15:36.460 "aliases": [ 00:15:36.460 "26246179-accc-4627-a8f3-a8a9fb35da3b" 00:15:36.460 ], 00:15:36.460 "product_name": "Malloc disk", 00:15:36.460 "block_size": 512, 00:15:36.460 "num_blocks": 65536, 00:15:36.460 "uuid": "26246179-accc-4627-a8f3-a8a9fb35da3b", 00:15:36.460 "assigned_rate_limits": { 00:15:36.460 "rw_ios_per_sec": 0, 00:15:36.460 "rw_mbytes_per_sec": 0, 00:15:36.460 "r_mbytes_per_sec": 0, 00:15:36.460 "w_mbytes_per_sec": 0 00:15:36.460 }, 00:15:36.460 "claimed": true, 00:15:36.460 "claim_type": "exclusive_write", 00:15:36.460 "zoned": false, 00:15:36.460 "supported_io_types": { 00:15:36.460 "read": true, 00:15:36.460 "write": true, 00:15:36.460 "unmap": true, 00:15:36.460 "flush": true, 00:15:36.460 "reset": true, 00:15:36.460 "nvme_admin": false, 00:15:36.460 "nvme_io": false, 00:15:36.460 "nvme_io_md": false, 00:15:36.460 "write_zeroes": true, 00:15:36.460 "zcopy": true, 00:15:36.460 "get_zone_info": false, 00:15:36.460 "zone_management": false, 00:15:36.460 "zone_append": false, 00:15:36.460 "compare": false, 00:15:36.460 "compare_and_write": false, 00:15:36.460 "abort": true, 00:15:36.460 "seek_hole": false, 00:15:36.460 "seek_data": false, 00:15:36.460 "copy": true, 00:15:36.460 "nvme_iov_md": false 00:15:36.460 }, 00:15:36.460 "memory_domains": [ 00:15:36.460 { 00:15:36.460 "dma_device_id": "system", 00:15:36.460 "dma_device_type": 1 00:15:36.460 }, 00:15:36.460 { 00:15:36.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.460 "dma_device_type": 2 00:15:36.460 } 00:15:36.460 ], 00:15:36.460 "driver_specific": {} 00:15:36.460 } 00:15:36.460 ] 00:15:36.460 02:29:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.460 "name": "Existed_Raid", 00:15:36.460 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c", 00:15:36.460 "strip_size_kb": 64, 00:15:36.460 "state": "online", 00:15:36.460 "raid_level": "raid5f", 00:15:36.460 "superblock": true, 00:15:36.460 "num_base_bdevs": 4, 00:15:36.460 "num_base_bdevs_discovered": 4, 00:15:36.460 "num_base_bdevs_operational": 4, 00:15:36.460 "base_bdevs_list": [ 00:15:36.460 { 00:15:36.460 "name": "NewBaseBdev", 00:15:36.460 "uuid": "26246179-accc-4627-a8f3-a8a9fb35da3b", 00:15:36.460 "is_configured": true, 00:15:36.460 "data_offset": 2048, 00:15:36.460 "data_size": 63488 00:15:36.460 }, 00:15:36.460 { 00:15:36.460 "name": "BaseBdev2", 00:15:36.460 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48", 00:15:36.460 "is_configured": true, 00:15:36.460 "data_offset": 2048, 00:15:36.460 "data_size": 63488 00:15:36.460 }, 00:15:36.460 { 00:15:36.460 "name": "BaseBdev3", 00:15:36.460 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e", 00:15:36.460 "is_configured": true, 00:15:36.460 "data_offset": 2048, 00:15:36.460 "data_size": 63488 00:15:36.460 }, 00:15:36.460 { 00:15:36.460 "name": "BaseBdev4", 00:15:36.460 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa", 00:15:36.460 "is_configured": true, 00:15:36.460 "data_offset": 2048, 00:15:36.460 "data_size": 63488 00:15:36.460 } 00:15:36.460 ] 00:15:36.460 }' 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.460 02:29:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.720 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.720 [2024-10-13 02:29:55.383638] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.019 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.019 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.019 "name": "Existed_Raid", 00:15:37.019 "aliases": [ 00:15:37.019 "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c" 00:15:37.019 ], 00:15:37.019 "product_name": "Raid Volume", 00:15:37.019 "block_size": 512, 00:15:37.019 "num_blocks": 190464, 00:15:37.019 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c", 00:15:37.019 "assigned_rate_limits": { 00:15:37.019 "rw_ios_per_sec": 0, 00:15:37.019 "rw_mbytes_per_sec": 0, 00:15:37.019 "r_mbytes_per_sec": 0, 00:15:37.019 "w_mbytes_per_sec": 0 00:15:37.019 }, 00:15:37.019 "claimed": false, 00:15:37.019 "zoned": false, 00:15:37.019 "supported_io_types": { 00:15:37.019 "read": true, 00:15:37.019 "write": true, 00:15:37.019 "unmap": false, 00:15:37.019 "flush": false, 00:15:37.019 "reset": true, 00:15:37.019 "nvme_admin": false, 00:15:37.020 "nvme_io": false, 
00:15:37.020 "nvme_io_md": false, 00:15:37.020 "write_zeroes": true, 00:15:37.020 "zcopy": false, 00:15:37.020 "get_zone_info": false, 00:15:37.020 "zone_management": false, 00:15:37.020 "zone_append": false, 00:15:37.020 "compare": false, 00:15:37.020 "compare_and_write": false, 00:15:37.020 "abort": false, 00:15:37.020 "seek_hole": false, 00:15:37.020 "seek_data": false, 00:15:37.020 "copy": false, 00:15:37.020 "nvme_iov_md": false 00:15:37.020 }, 00:15:37.020 "driver_specific": { 00:15:37.020 "raid": { 00:15:37.020 "uuid": "f2f64cae-ff13-40ca-83fd-0d43a7ed2e6c", 00:15:37.020 "strip_size_kb": 64, 00:15:37.020 "state": "online", 00:15:37.020 "raid_level": "raid5f", 00:15:37.020 "superblock": true, 00:15:37.020 "num_base_bdevs": 4, 00:15:37.020 "num_base_bdevs_discovered": 4, 00:15:37.020 "num_base_bdevs_operational": 4, 00:15:37.020 "base_bdevs_list": [ 00:15:37.020 { 00:15:37.020 "name": "NewBaseBdev", 00:15:37.020 "uuid": "26246179-accc-4627-a8f3-a8a9fb35da3b", 00:15:37.020 "is_configured": true, 00:15:37.020 "data_offset": 2048, 00:15:37.020 "data_size": 63488 00:15:37.020 }, 00:15:37.020 { 00:15:37.020 "name": "BaseBdev2", 00:15:37.020 "uuid": "68bf58e3-3785-4a7d-b30e-d36e69b89b48", 00:15:37.020 "is_configured": true, 00:15:37.020 "data_offset": 2048, 00:15:37.020 "data_size": 63488 00:15:37.020 }, 00:15:37.020 { 00:15:37.020 "name": "BaseBdev3", 00:15:37.020 "uuid": "b25e86c3-b965-405c-afc0-fa83cae9588e", 00:15:37.020 "is_configured": true, 00:15:37.020 "data_offset": 2048, 00:15:37.020 "data_size": 63488 00:15:37.020 }, 00:15:37.020 { 00:15:37.020 "name": "BaseBdev4", 00:15:37.020 "uuid": "7e3bd6e8-cbf2-4886-8964-2756833388aa", 00:15:37.020 "is_configured": true, 00:15:37.020 "data_offset": 2048, 00:15:37.020 "data_size": 63488 00:15:37.020 } 00:15:37.020 ] 00:15:37.020 } 00:15:37.020 } 00:15:37.020 }' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:37.020 BaseBdev2 00:15:37.020 BaseBdev3 00:15:37.020 BaseBdev4' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.020 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.278 [2024-10-13 02:29:55.702917] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.278 [2024-10-13 02:29:55.703032] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.278 [2024-10-13 02:29:55.703144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.278 [2024-10-13 02:29:55.703449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.278 [2024-10-13 02:29:55.703518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:15:37.278 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.278 02:29:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93829 00:15:37.278 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93829 ']' 00:15:37.278 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 93829 00:15:37.278 02:29:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:37.278 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.278 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93829 00:15:37.278 killing process with pid 93829 00:15:37.278 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:37.278 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:37.279 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93829' 00:15:37.279 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93829 00:15:37.279 02:29:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93829 00:15:37.279 [2024-10-13 02:29:55.743570] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.279 [2024-10-13 02:29:55.784945] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:37.538 02:29:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:37.538 ************************************ 00:15:37.538 END TEST raid5f_state_function_test_sb 00:15:37.538 ************************************ 00:15:37.538 00:15:37.538 real 0m9.695s 00:15:37.538 user 0m16.498s 00:15:37.538 sys 0m2.144s 00:15:37.538 02:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:37.538 02:29:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.538 02:29:56 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:37.538 02:29:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:37.538 
02:29:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.538 02:29:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:37.538 ************************************ 00:15:37.538 START TEST raid5f_superblock_test 00:15:37.538 ************************************ 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94477 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94477 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94477 ']' 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.538 02:29:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.538 [2024-10-13 02:29:56.156758] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:37.538 [2024-10-13 02:29:56.157004] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94477 ] 00:15:37.797 [2024-10-13 02:29:56.282973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.797 [2024-10-13 02:29:56.334201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.797 [2024-10-13 02:29:56.376374] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.797 [2024-10-13 02:29:56.376411] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.365 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.625 malloc1 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.625 [2024-10-13 02:29:57.058553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:38.625 [2024-10-13 02:29:57.058630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.625 [2024-10-13 02:29:57.058666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:38.625 [2024-10-13 02:29:57.058680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.625 [2024-10-13 02:29:57.060839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.625 [2024-10-13 02:29:57.061002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:38.625 pt1 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.625 malloc2 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.625 [2024-10-13 02:29:57.096360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:38.625 [2024-10-13 02:29:57.096529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.625 [2024-10-13 02:29:57.096552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:38.625 [2024-10-13 02:29:57.096562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.625 [2024-10-13 02:29:57.098643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.625 [2024-10-13 02:29:57.098683] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:38.625 pt2 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.625 malloc3 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.625 [2024-10-13 02:29:57.125049] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.625 [2024-10-13 02:29:57.125122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.625 [2024-10-13 02:29:57.125156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:38.625 [2024-10-13 02:29:57.125166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.625 [2024-10-13 02:29:57.127213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.625 [2024-10-13 02:29:57.127320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.625 pt3 00:15:38.625 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.626 02:29:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.626 malloc4 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.626 [2024-10-13 02:29:57.153735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:38.626 [2024-10-13 02:29:57.153887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.626 [2024-10-13 02:29:57.153924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:38.626 [2024-10-13 02:29:57.153986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.626 [2024-10-13 02:29:57.156155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.626 [2024-10-13 02:29:57.156236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:38.626 pt4 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.626 [2024-10-13 02:29:57.165791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:38.626 [2024-10-13 02:29:57.167850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.626 [2024-10-13 02:29:57.167981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.626 [2024-10-13 02:29:57.168052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:38.626 [2024-10-13 02:29:57.168250] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:38.626 [2024-10-13 02:29:57.168305] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:38.626 [2024-10-13 02:29:57.168584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:38.626 [2024-10-13 02:29:57.169111] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:38.626 [2024-10-13 02:29:57.169162] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:38.626 [2024-10-13 02:29:57.169355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.626 
02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.626 "name": "raid_bdev1", 00:15:38.626 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:38.626 "strip_size_kb": 64, 00:15:38.626 "state": "online", 00:15:38.626 "raid_level": "raid5f", 00:15:38.626 "superblock": true, 00:15:38.626 "num_base_bdevs": 4, 00:15:38.626 "num_base_bdevs_discovered": 4, 00:15:38.626 "num_base_bdevs_operational": 4, 00:15:38.626 "base_bdevs_list": [ 00:15:38.626 { 00:15:38.626 "name": "pt1", 00:15:38.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.626 "is_configured": true, 00:15:38.626 "data_offset": 2048, 00:15:38.626 "data_size": 63488 00:15:38.626 }, 00:15:38.626 { 00:15:38.626 "name": "pt2", 00:15:38.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.626 "is_configured": true, 00:15:38.626 "data_offset": 2048, 00:15:38.626 
"data_size": 63488 00:15:38.626 }, 00:15:38.626 { 00:15:38.626 "name": "pt3", 00:15:38.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.626 "is_configured": true, 00:15:38.626 "data_offset": 2048, 00:15:38.626 "data_size": 63488 00:15:38.626 }, 00:15:38.626 { 00:15:38.626 "name": "pt4", 00:15:38.626 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.626 "is_configured": true, 00:15:38.626 "data_offset": 2048, 00:15:38.626 "data_size": 63488 00:15:38.626 } 00:15:38.626 ] 00:15:38.626 }' 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.626 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.195 [2024-10-13 02:29:57.614611] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:39.195 "name": "raid_bdev1", 00:15:39.195 "aliases": [ 00:15:39.195 "8fc10448-30f8-4fa7-8540-5f313f2094ff" 00:15:39.195 ], 00:15:39.195 "product_name": "Raid Volume", 00:15:39.195 "block_size": 512, 00:15:39.195 "num_blocks": 190464, 00:15:39.195 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:39.195 "assigned_rate_limits": { 00:15:39.195 "rw_ios_per_sec": 0, 00:15:39.195 "rw_mbytes_per_sec": 0, 00:15:39.195 "r_mbytes_per_sec": 0, 00:15:39.195 "w_mbytes_per_sec": 0 00:15:39.195 }, 00:15:39.195 "claimed": false, 00:15:39.195 "zoned": false, 00:15:39.195 "supported_io_types": { 00:15:39.195 "read": true, 00:15:39.195 "write": true, 00:15:39.195 "unmap": false, 00:15:39.195 "flush": false, 00:15:39.195 "reset": true, 00:15:39.195 "nvme_admin": false, 00:15:39.195 "nvme_io": false, 00:15:39.195 "nvme_io_md": false, 00:15:39.195 "write_zeroes": true, 00:15:39.195 "zcopy": false, 00:15:39.195 "get_zone_info": false, 00:15:39.195 "zone_management": false, 00:15:39.195 "zone_append": false, 00:15:39.195 "compare": false, 00:15:39.195 "compare_and_write": false, 00:15:39.195 "abort": false, 00:15:39.195 "seek_hole": false, 00:15:39.195 "seek_data": false, 00:15:39.195 "copy": false, 00:15:39.195 "nvme_iov_md": false 00:15:39.195 }, 00:15:39.195 "driver_specific": { 00:15:39.195 "raid": { 00:15:39.195 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:39.195 "strip_size_kb": 64, 00:15:39.195 "state": "online", 00:15:39.195 "raid_level": "raid5f", 00:15:39.195 "superblock": true, 00:15:39.195 "num_base_bdevs": 4, 00:15:39.195 "num_base_bdevs_discovered": 4, 00:15:39.195 "num_base_bdevs_operational": 4, 00:15:39.195 "base_bdevs_list": [ 00:15:39.195 { 00:15:39.195 "name": "pt1", 00:15:39.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.195 "is_configured": true, 00:15:39.195 "data_offset": 2048, 
00:15:39.195 "data_size": 63488 00:15:39.195 }, 00:15:39.195 { 00:15:39.195 "name": "pt2", 00:15:39.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.195 "is_configured": true, 00:15:39.195 "data_offset": 2048, 00:15:39.195 "data_size": 63488 00:15:39.195 }, 00:15:39.195 { 00:15:39.195 "name": "pt3", 00:15:39.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.195 "is_configured": true, 00:15:39.195 "data_offset": 2048, 00:15:39.195 "data_size": 63488 00:15:39.195 }, 00:15:39.195 { 00:15:39.195 "name": "pt4", 00:15:39.195 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:39.195 "is_configured": true, 00:15:39.195 "data_offset": 2048, 00:15:39.195 "data_size": 63488 00:15:39.195 } 00:15:39.195 ] 00:15:39.195 } 00:15:39.195 } 00:15:39.195 }' 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:39.195 pt2 00:15:39.195 pt3 00:15:39.195 pt4' 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.195 02:29:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.195 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 [2024-10-13 02:29:57.966004] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8fc10448-30f8-4fa7-8540-5f313f2094ff 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
8fc10448-30f8-4fa7-8540-5f313f2094ff ']' 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.455 02:29:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 [2024-10-13 02:29:58.005725] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.455 [2024-10-13 02:29:58.005831] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.455 [2024-10-13 02:29:58.005965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.455 [2024-10-13 02:29:58.006075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.455 [2024-10-13 02:29:58.006133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:39.455 
02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 02:29:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:39.455 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:39.715 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.715 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:39.715 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.715 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:39.715 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:39.715 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.715 [2024-10-13 02:29:58.145531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:39.715 [2024-10-13 02:29:58.147512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:39.715 [2024-10-13 02:29:58.147574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:39.715 [2024-10-13 02:29:58.147610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:39.715 [2024-10-13 02:29:58.147677] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:39.715 [2024-10-13 02:29:58.147762] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:39.715 [2024-10-13 02:29:58.147784] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:39.715 [2024-10-13 02:29:58.147800] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:39.715 [2024-10-13 02:29:58.147814] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.715 [2024-10-13 02:29:58.147825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:39.715 request: 00:15:39.715 { 00:15:39.715 "name": "raid_bdev1", 00:15:39.715 "raid_level": "raid5f", 00:15:39.715 "base_bdevs": [ 00:15:39.715 "malloc1", 00:15:39.715 "malloc2", 00:15:39.715 "malloc3", 00:15:39.715 "malloc4" 00:15:39.715 ], 00:15:39.715 "strip_size_kb": 64, 00:15:39.715 "superblock": false, 00:15:39.715 "method": "bdev_raid_create", 00:15:39.715 "req_id": 1 00:15:39.715 } 00:15:39.715 Got JSON-RPC error response 
00:15:39.715 response: 00:15:39.715 { 00:15:39.715 "code": -17, 00:15:39.715 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:39.715 } 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.716 [2024-10-13 02:29:58.205366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:39.716 [2024-10-13 02:29:58.205533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:39.716 [2024-10-13 02:29:58.205568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:39.716 [2024-10-13 02:29:58.205579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.716 [2024-10-13 02:29:58.207959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.716 [2024-10-13 02:29:58.207997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:39.716 [2024-10-13 02:29:58.208087] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:39.716 [2024-10-13 02:29:58.208132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:39.716 pt1 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.716 "name": "raid_bdev1", 00:15:39.716 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:39.716 "strip_size_kb": 64, 00:15:39.716 "state": "configuring", 00:15:39.716 "raid_level": "raid5f", 00:15:39.716 "superblock": true, 00:15:39.716 "num_base_bdevs": 4, 00:15:39.716 "num_base_bdevs_discovered": 1, 00:15:39.716 "num_base_bdevs_operational": 4, 00:15:39.716 "base_bdevs_list": [ 00:15:39.716 { 00:15:39.716 "name": "pt1", 00:15:39.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.716 "is_configured": true, 00:15:39.716 "data_offset": 2048, 00:15:39.716 "data_size": 63488 00:15:39.716 }, 00:15:39.716 { 00:15:39.716 "name": null, 00:15:39.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.716 "is_configured": false, 00:15:39.716 "data_offset": 2048, 00:15:39.716 "data_size": 63488 00:15:39.716 }, 00:15:39.716 { 00:15:39.716 "name": null, 00:15:39.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.716 "is_configured": false, 00:15:39.716 "data_offset": 2048, 00:15:39.716 "data_size": 63488 00:15:39.716 }, 00:15:39.716 { 00:15:39.716 "name": null, 00:15:39.716 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:39.716 "is_configured": false, 00:15:39.716 "data_offset": 2048, 00:15:39.716 "data_size": 63488 00:15:39.716 } 00:15:39.716 ] 00:15:39.716 }' 
00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.716 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.284 [2024-10-13 02:29:58.688545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:40.284 [2024-10-13 02:29:58.688631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.284 [2024-10-13 02:29:58.688655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:40.284 [2024-10-13 02:29:58.688664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.284 [2024-10-13 02:29:58.689093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.284 [2024-10-13 02:29:58.689114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:40.284 [2024-10-13 02:29:58.689198] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:40.284 [2024-10-13 02:29:58.689235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.284 pt2 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.284 [2024-10-13 02:29:58.696537] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.284 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.285 "name": "raid_bdev1", 00:15:40.285 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:40.285 "strip_size_kb": 64, 00:15:40.285 "state": "configuring", 00:15:40.285 "raid_level": "raid5f", 00:15:40.285 "superblock": true, 00:15:40.285 "num_base_bdevs": 4, 00:15:40.285 "num_base_bdevs_discovered": 1, 00:15:40.285 "num_base_bdevs_operational": 4, 00:15:40.285 "base_bdevs_list": [ 00:15:40.285 { 00:15:40.285 "name": "pt1", 00:15:40.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.285 "is_configured": true, 00:15:40.285 "data_offset": 2048, 00:15:40.285 "data_size": 63488 00:15:40.285 }, 00:15:40.285 { 00:15:40.285 "name": null, 00:15:40.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.285 "is_configured": false, 00:15:40.285 "data_offset": 0, 00:15:40.285 "data_size": 63488 00:15:40.285 }, 00:15:40.285 { 00:15:40.285 "name": null, 00:15:40.285 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.285 "is_configured": false, 00:15:40.285 "data_offset": 2048, 00:15:40.285 "data_size": 63488 00:15:40.285 }, 00:15:40.285 { 00:15:40.285 "name": null, 00:15:40.285 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.285 "is_configured": false, 00:15:40.285 "data_offset": 2048, 00:15:40.285 "data_size": 63488 00:15:40.285 } 00:15:40.285 ] 00:15:40.285 }' 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.285 02:29:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.544 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:40.544 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:40.544 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:40.544 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.544 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.544 [2024-10-13 02:29:59.155774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:40.544 [2024-10-13 02:29:59.155866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.544 [2024-10-13 02:29:59.155906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:40.544 [2024-10-13 02:29:59.155916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.545 [2024-10-13 02:29:59.156337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.545 [2024-10-13 02:29:59.156362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:40.545 [2024-10-13 02:29:59.156442] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:40.545 [2024-10-13 02:29:59.156466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.545 pt2 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.545 [2024-10-13 02:29:59.167694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:40.545 [2024-10-13 02:29:59.167784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.545 [2024-10-13 02:29:59.167805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:40.545 [2024-10-13 02:29:59.167825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.545 [2024-10-13 02:29:59.168241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.545 [2024-10-13 02:29:59.168260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:40.545 [2024-10-13 02:29:59.168334] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:40.545 [2024-10-13 02:29:59.168356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:40.545 pt3 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.545 [2024-10-13 02:29:59.179709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:40.545 [2024-10-13 02:29:59.179852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.545 [2024-10-13 02:29:59.179885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:40.545 [2024-10-13 02:29:59.179896] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.545 [2024-10-13 02:29:59.180241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.545 [2024-10-13 02:29:59.180260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:40.545 [2024-10-13 02:29:59.180326] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:40.545 [2024-10-13 02:29:59.180347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:40.545 [2024-10-13 02:29:59.180454] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:40.545 [2024-10-13 02:29:59.180466] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:40.545 [2024-10-13 02:29:59.180699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:40.545 [2024-10-13 02:29:59.181144] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:40.545 [2024-10-13 02:29:59.181160] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:40.545 [2024-10-13 02:29:59.181268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.545 pt4 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.545 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.804 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.804 "name": "raid_bdev1", 00:15:40.804 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:40.804 "strip_size_kb": 64, 00:15:40.804 "state": "online", 00:15:40.804 "raid_level": "raid5f", 00:15:40.804 "superblock": true, 00:15:40.804 "num_base_bdevs": 4, 00:15:40.804 "num_base_bdevs_discovered": 4, 00:15:40.804 "num_base_bdevs_operational": 4, 00:15:40.804 "base_bdevs_list": [ 00:15:40.804 { 00:15:40.804 "name": "pt1", 00:15:40.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.804 "is_configured": true, 00:15:40.804 
"data_offset": 2048, 00:15:40.804 "data_size": 63488 00:15:40.804 }, 00:15:40.804 { 00:15:40.804 "name": "pt2", 00:15:40.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.804 "is_configured": true, 00:15:40.804 "data_offset": 2048, 00:15:40.804 "data_size": 63488 00:15:40.804 }, 00:15:40.804 { 00:15:40.804 "name": "pt3", 00:15:40.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.804 "is_configured": true, 00:15:40.804 "data_offset": 2048, 00:15:40.804 "data_size": 63488 00:15:40.804 }, 00:15:40.804 { 00:15:40.804 "name": "pt4", 00:15:40.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.804 "is_configured": true, 00:15:40.804 "data_offset": 2048, 00:15:40.804 "data_size": 63488 00:15:40.804 } 00:15:40.804 ] 00:15:40.804 }' 00:15:40.804 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.804 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.063 02:29:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:41.063 [2024-10-13 02:29:59.639153] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:41.063 "name": "raid_bdev1", 00:15:41.063 "aliases": [ 00:15:41.063 "8fc10448-30f8-4fa7-8540-5f313f2094ff" 00:15:41.063 ], 00:15:41.063 "product_name": "Raid Volume", 00:15:41.063 "block_size": 512, 00:15:41.063 "num_blocks": 190464, 00:15:41.063 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:41.063 "assigned_rate_limits": { 00:15:41.063 "rw_ios_per_sec": 0, 00:15:41.063 "rw_mbytes_per_sec": 0, 00:15:41.063 "r_mbytes_per_sec": 0, 00:15:41.063 "w_mbytes_per_sec": 0 00:15:41.063 }, 00:15:41.063 "claimed": false, 00:15:41.063 "zoned": false, 00:15:41.063 "supported_io_types": { 00:15:41.063 "read": true, 00:15:41.063 "write": true, 00:15:41.063 "unmap": false, 00:15:41.063 "flush": false, 00:15:41.063 "reset": true, 00:15:41.063 "nvme_admin": false, 00:15:41.063 "nvme_io": false, 00:15:41.063 "nvme_io_md": false, 00:15:41.063 "write_zeroes": true, 00:15:41.063 "zcopy": false, 00:15:41.063 "get_zone_info": false, 00:15:41.063 "zone_management": false, 00:15:41.063 "zone_append": false, 00:15:41.063 "compare": false, 00:15:41.063 "compare_and_write": false, 00:15:41.063 "abort": false, 00:15:41.063 "seek_hole": false, 00:15:41.063 "seek_data": false, 00:15:41.063 "copy": false, 00:15:41.063 "nvme_iov_md": false 00:15:41.063 }, 00:15:41.063 "driver_specific": { 00:15:41.063 "raid": { 00:15:41.063 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:41.063 "strip_size_kb": 64, 00:15:41.063 "state": "online", 00:15:41.063 "raid_level": "raid5f", 00:15:41.063 "superblock": true, 00:15:41.063 "num_base_bdevs": 4, 00:15:41.063 "num_base_bdevs_discovered": 4, 
00:15:41.063 "num_base_bdevs_operational": 4, 00:15:41.063 "base_bdevs_list": [ 00:15:41.063 { 00:15:41.063 "name": "pt1", 00:15:41.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.063 "is_configured": true, 00:15:41.063 "data_offset": 2048, 00:15:41.063 "data_size": 63488 00:15:41.063 }, 00:15:41.063 { 00:15:41.063 "name": "pt2", 00:15:41.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.063 "is_configured": true, 00:15:41.063 "data_offset": 2048, 00:15:41.063 "data_size": 63488 00:15:41.063 }, 00:15:41.063 { 00:15:41.063 "name": "pt3", 00:15:41.063 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.063 "is_configured": true, 00:15:41.063 "data_offset": 2048, 00:15:41.063 "data_size": 63488 00:15:41.063 }, 00:15:41.063 { 00:15:41.063 "name": "pt4", 00:15:41.063 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.063 "is_configured": true, 00:15:41.063 "data_offset": 2048, 00:15:41.063 "data_size": 63488 00:15:41.063 } 00:15:41.063 ] 00:15:41.063 } 00:15:41.063 } 00:15:41.063 }' 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:41.063 pt2 00:15:41.063 pt3 00:15:41.063 pt4' 00:15:41.063 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.323 [2024-10-13 02:29:59.942604] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.323 02:29:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8fc10448-30f8-4fa7-8540-5f313f2094ff '!=' 8fc10448-30f8-4fa7-8540-5f313f2094ff ']' 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.323 [2024-10-13 02:29:59.990382] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.323 02:29:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.582 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.582 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.582 "name": "raid_bdev1", 00:15:41.582 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:41.582 "strip_size_kb": 64, 00:15:41.582 "state": "online", 00:15:41.582 "raid_level": "raid5f", 00:15:41.582 "superblock": true, 00:15:41.582 "num_base_bdevs": 4, 00:15:41.582 "num_base_bdevs_discovered": 3, 00:15:41.582 "num_base_bdevs_operational": 3, 00:15:41.582 "base_bdevs_list": [ 00:15:41.582 { 00:15:41.582 "name": null, 00:15:41.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.582 "is_configured": false, 00:15:41.582 "data_offset": 0, 00:15:41.582 "data_size": 63488 00:15:41.582 }, 00:15:41.582 { 00:15:41.582 "name": "pt2", 00:15:41.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.582 "is_configured": true, 00:15:41.582 "data_offset": 2048, 00:15:41.582 "data_size": 63488 00:15:41.582 }, 00:15:41.582 { 00:15:41.582 "name": "pt3", 00:15:41.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.582 "is_configured": true, 00:15:41.582 "data_offset": 2048, 00:15:41.582 "data_size": 63488 00:15:41.582 }, 00:15:41.582 { 00:15:41.582 "name": "pt4", 00:15:41.582 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.582 "is_configured": true, 00:15:41.582 
"data_offset": 2048, 00:15:41.582 "data_size": 63488 00:15:41.582 } 00:15:41.582 ] 00:15:41.582 }' 00:15:41.582 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.582 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.842 [2024-10-13 02:30:00.429641] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.842 [2024-10-13 02:30:00.429681] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.842 [2024-10-13 02:30:00.429770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.842 [2024-10-13 02:30:00.429842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.842 [2024-10-13 02:30:00.429853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.842 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 [2024-10-13 02:30:00.529449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:42.101 [2024-10-13 02:30:00.529530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.101 [2024-10-13 02:30:00.529548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:42.101 [2024-10-13 02:30:00.529585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.101 [2024-10-13 02:30:00.532002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.101 [2024-10-13 02:30:00.532048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:42.101 [2024-10-13 02:30:00.532130] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:42.101 [2024-10-13 02:30:00.532173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:42.101 pt2 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.101 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.101 "name": "raid_bdev1", 00:15:42.101 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:42.101 "strip_size_kb": 64, 00:15:42.101 "state": "configuring", 00:15:42.102 "raid_level": "raid5f", 00:15:42.102 "superblock": true, 00:15:42.102 
"num_base_bdevs": 4, 00:15:42.102 "num_base_bdevs_discovered": 1, 00:15:42.102 "num_base_bdevs_operational": 3, 00:15:42.102 "base_bdevs_list": [ 00:15:42.102 { 00:15:42.102 "name": null, 00:15:42.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.102 "is_configured": false, 00:15:42.102 "data_offset": 2048, 00:15:42.102 "data_size": 63488 00:15:42.102 }, 00:15:42.102 { 00:15:42.102 "name": "pt2", 00:15:42.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.102 "is_configured": true, 00:15:42.102 "data_offset": 2048, 00:15:42.102 "data_size": 63488 00:15:42.102 }, 00:15:42.102 { 00:15:42.102 "name": null, 00:15:42.102 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.102 "is_configured": false, 00:15:42.102 "data_offset": 2048, 00:15:42.102 "data_size": 63488 00:15:42.102 }, 00:15:42.102 { 00:15:42.102 "name": null, 00:15:42.102 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:42.102 "is_configured": false, 00:15:42.102 "data_offset": 2048, 00:15:42.102 "data_size": 63488 00:15:42.102 } 00:15:42.102 ] 00:15:42.102 }' 00:15:42.102 02:30:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.102 02:30:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.360 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:42.360 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:42.360 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:42.360 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.360 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.360 [2024-10-13 02:30:01.020652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:42.360 [2024-10-13 
02:30:01.020833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.360 [2024-10-13 02:30:01.020884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:42.360 [2024-10-13 02:30:01.020922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.361 [2024-10-13 02:30:01.021362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.361 [2024-10-13 02:30:01.021431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:42.361 [2024-10-13 02:30:01.021543] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:42.361 [2024-10-13 02:30:01.021597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:42.361 pt3 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.361 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.620 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.620 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.620 "name": "raid_bdev1", 00:15:42.620 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:42.620 "strip_size_kb": 64, 00:15:42.620 "state": "configuring", 00:15:42.620 "raid_level": "raid5f", 00:15:42.620 "superblock": true, 00:15:42.620 "num_base_bdevs": 4, 00:15:42.620 "num_base_bdevs_discovered": 2, 00:15:42.620 "num_base_bdevs_operational": 3, 00:15:42.620 "base_bdevs_list": [ 00:15:42.620 { 00:15:42.620 "name": null, 00:15:42.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.620 "is_configured": false, 00:15:42.620 "data_offset": 2048, 00:15:42.620 "data_size": 63488 00:15:42.620 }, 00:15:42.620 { 00:15:42.620 "name": "pt2", 00:15:42.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.620 "is_configured": true, 00:15:42.620 "data_offset": 2048, 00:15:42.620 "data_size": 63488 00:15:42.620 }, 00:15:42.620 { 00:15:42.620 "name": "pt3", 00:15:42.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.620 "is_configured": true, 00:15:42.620 "data_offset": 2048, 00:15:42.620 "data_size": 63488 00:15:42.620 }, 00:15:42.620 { 00:15:42.620 "name": null, 00:15:42.620 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:42.620 "is_configured": false, 00:15:42.620 "data_offset": 2048, 
00:15:42.620 "data_size": 63488 00:15:42.620 } 00:15:42.620 ] 00:15:42.620 }' 00:15:42.620 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.620 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.879 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:42.879 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:42.879 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:42.879 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:42.879 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.879 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.879 [2024-10-13 02:30:01.471828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:42.879 [2024-10-13 02:30:01.472020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.879 [2024-10-13 02:30:01.472060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:42.879 [2024-10-13 02:30:01.472090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.879 [2024-10-13 02:30:01.472537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.879 [2024-10-13 02:30:01.472605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:42.880 [2024-10-13 02:30:01.472718] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:42.880 [2024-10-13 02:30:01.472773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:42.880 [2024-10-13 02:30:01.472912] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:42.880 [2024-10-13 02:30:01.472954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:42.880 [2024-10-13 02:30:01.473211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:42.880 [2024-10-13 02:30:01.473768] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:42.880 [2024-10-13 02:30:01.473817] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:42.880 [2024-10-13 02:30:01.474066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.880 pt4 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.880 
02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.880 "name": "raid_bdev1", 00:15:42.880 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:42.880 "strip_size_kb": 64, 00:15:42.880 "state": "online", 00:15:42.880 "raid_level": "raid5f", 00:15:42.880 "superblock": true, 00:15:42.880 "num_base_bdevs": 4, 00:15:42.880 "num_base_bdevs_discovered": 3, 00:15:42.880 "num_base_bdevs_operational": 3, 00:15:42.880 "base_bdevs_list": [ 00:15:42.880 { 00:15:42.880 "name": null, 00:15:42.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.880 "is_configured": false, 00:15:42.880 "data_offset": 2048, 00:15:42.880 "data_size": 63488 00:15:42.880 }, 00:15:42.880 { 00:15:42.880 "name": "pt2", 00:15:42.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.880 "is_configured": true, 00:15:42.880 "data_offset": 2048, 00:15:42.880 "data_size": 63488 00:15:42.880 }, 00:15:42.880 { 00:15:42.880 "name": "pt3", 00:15:42.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.880 "is_configured": true, 00:15:42.880 "data_offset": 2048, 00:15:42.880 "data_size": 63488 00:15:42.880 }, 00:15:42.880 { 00:15:42.880 "name": "pt4", 00:15:42.880 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:42.880 "is_configured": true, 00:15:42.880 "data_offset": 2048, 00:15:42.880 "data_size": 63488 00:15:42.880 } 00:15:42.880 ] 00:15:42.880 }' 00:15:42.880 02:30:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.880 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.449 [2024-10-13 02:30:01.927684] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.449 [2024-10-13 02:30:01.927812] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.449 [2024-10-13 02:30:01.927929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.449 [2024-10-13 02:30:01.928045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.449 [2024-10-13 02:30:01.928159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.449 02:30:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.449 [2024-10-13 02:30:02.003545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:43.449 [2024-10-13 02:30:02.003648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.449 [2024-10-13 02:30:02.003671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:43.449 [2024-10-13 02:30:02.003680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.449 [2024-10-13 02:30:02.005995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.449 [2024-10-13 02:30:02.006036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:43.449 [2024-10-13 02:30:02.006122] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:43.449 [2024-10-13 02:30:02.006163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.449 
[2024-10-13 02:30:02.006275] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:43.449 [2024-10-13 02:30:02.006288] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.449 [2024-10-13 02:30:02.006310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:43.449 [2024-10-13 02:30:02.006339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.449 [2024-10-13 02:30:02.006438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.449 pt1 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.449 "name": "raid_bdev1", 00:15:43.449 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:43.449 "strip_size_kb": 64, 00:15:43.449 "state": "configuring", 00:15:43.449 "raid_level": "raid5f", 00:15:43.449 "superblock": true, 00:15:43.449 "num_base_bdevs": 4, 00:15:43.449 "num_base_bdevs_discovered": 2, 00:15:43.449 "num_base_bdevs_operational": 3, 00:15:43.449 "base_bdevs_list": [ 00:15:43.449 { 00:15:43.449 "name": null, 00:15:43.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.449 "is_configured": false, 00:15:43.449 "data_offset": 2048, 00:15:43.449 "data_size": 63488 00:15:43.449 }, 00:15:43.449 { 00:15:43.449 "name": "pt2", 00:15:43.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.449 "is_configured": true, 00:15:43.449 "data_offset": 2048, 00:15:43.449 "data_size": 63488 00:15:43.449 }, 00:15:43.449 { 00:15:43.449 "name": "pt3", 00:15:43.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.449 "is_configured": true, 00:15:43.449 "data_offset": 2048, 00:15:43.449 "data_size": 63488 00:15:43.449 }, 00:15:43.449 { 00:15:43.449 "name": null, 00:15:43.449 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:43.449 "is_configured": false, 00:15:43.449 "data_offset": 2048, 00:15:43.449 "data_size": 63488 00:15:43.449 } 00:15:43.449 ] 
00:15:43.449 }' 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.449 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 [2024-10-13 02:30:02.498747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:44.016 [2024-10-13 02:30:02.498848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.016 [2024-10-13 02:30:02.498894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:44.016 [2024-10-13 02:30:02.498908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.016 [2024-10-13 02:30:02.499359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.016 [2024-10-13 02:30:02.499391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:44.016 [2024-10-13 02:30:02.499476] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:44.016 [2024-10-13 02:30:02.499513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:44.016 [2024-10-13 02:30:02.499643] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:44.016 [2024-10-13 02:30:02.499658] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:44.016 [2024-10-13 02:30:02.499939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:44.016 [2024-10-13 02:30:02.500550] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:44.016 [2024-10-13 02:30:02.500623] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:44.016 [2024-10-13 02:30:02.500842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.016 pt4 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.016 02:30:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.016 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.016 "name": "raid_bdev1", 00:15:44.016 "uuid": "8fc10448-30f8-4fa7-8540-5f313f2094ff", 00:15:44.016 "strip_size_kb": 64, 00:15:44.016 "state": "online", 00:15:44.016 "raid_level": "raid5f", 00:15:44.016 "superblock": true, 00:15:44.016 "num_base_bdevs": 4, 00:15:44.016 "num_base_bdevs_discovered": 3, 00:15:44.016 "num_base_bdevs_operational": 3, 00:15:44.016 "base_bdevs_list": [ 00:15:44.016 { 00:15:44.016 "name": null, 00:15:44.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.016 "is_configured": false, 00:15:44.016 "data_offset": 2048, 00:15:44.016 "data_size": 63488 00:15:44.016 }, 00:15:44.016 { 00:15:44.016 "name": "pt2", 00:15:44.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.016 "is_configured": true, 00:15:44.016 "data_offset": 2048, 00:15:44.016 "data_size": 63488 00:15:44.016 }, 00:15:44.016 { 00:15:44.016 "name": "pt3", 00:15:44.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.016 "is_configured": true, 00:15:44.016 "data_offset": 2048, 00:15:44.016 "data_size": 63488 
00:15:44.016 }, 00:15:44.016 { 00:15:44.016 "name": "pt4", 00:15:44.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:44.016 "is_configured": true, 00:15:44.016 "data_offset": 2048, 00:15:44.016 "data_size": 63488 00:15:44.017 } 00:15:44.017 ] 00:15:44.017 }' 00:15:44.017 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.017 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.584 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:44.584 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.584 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.584 02:30:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:44.584 02:30:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.584 [2024-10-13 02:30:03.018144] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8fc10448-30f8-4fa7-8540-5f313f2094ff '!=' 8fc10448-30f8-4fa7-8540-5f313f2094ff ']' 00:15:44.584 02:30:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94477 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94477 ']' 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94477 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94477 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94477' 00:15:44.584 killing process with pid 94477 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94477 00:15:44.584 [2024-10-13 02:30:03.096237] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.584 [2024-10-13 02:30:03.096358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.584 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94477 00:15:44.584 [2024-10-13 02:30:03.096448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.584 [2024-10-13 02:30:03.096459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:44.584 [2024-10-13 02:30:03.140675] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.844 02:30:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:44.844 
00:15:44.844 real 0m7.282s 00:15:44.844 user 0m12.229s 00:15:44.844 sys 0m1.614s 00:15:44.844 ************************************ 00:15:44.844 END TEST raid5f_superblock_test 00:15:44.844 ************************************ 00:15:44.844 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:44.844 02:30:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.844 02:30:03 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:44.844 02:30:03 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:44.844 02:30:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:44.844 02:30:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:44.844 02:30:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.844 ************************************ 00:15:44.844 START TEST raid5f_rebuild_test 00:15:44.844 ************************************ 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.844 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:44.845 02:30:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94956 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94956 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 94956 ']' 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:44.845 02:30:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.104 [2024-10-13 02:30:03.536902] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:45.104 [2024-10-13 02:30:03.537121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94956 ] 00:15:45.104 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:45.104 Zero copy mechanism will not be used. 00:15:45.104 [2024-10-13 02:30:03.678788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.104 [2024-10-13 02:30:03.731348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.104 [2024-10-13 02:30:03.773831] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.104 [2024-10-13 02:30:03.773983] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 BaseBdev1_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:15:46.041 [2024-10-13 02:30:04.460246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:46.041 [2024-10-13 02:30:04.460319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.041 [2024-10-13 02:30:04.460355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:46.041 [2024-10-13 02:30:04.460370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.041 [2024-10-13 02:30:04.462529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.041 [2024-10-13 02:30:04.462570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:46.041 BaseBdev1 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 BaseBdev2_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 [2024-10-13 02:30:04.499389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:46.041 [2024-10-13 02:30:04.499555] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.041 [2024-10-13 02:30:04.499586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:46.041 [2024-10-13 02:30:04.499598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.041 [2024-10-13 02:30:04.502212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.041 [2024-10-13 02:30:04.502259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:46.041 BaseBdev2 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 BaseBdev3_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 [2024-10-13 02:30:04.528370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:46.041 [2024-10-13 02:30:04.528444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.041 [2024-10-13 02:30:04.528473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:46.041 
[2024-10-13 02:30:04.528482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.041 [2024-10-13 02:30:04.530636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.041 [2024-10-13 02:30:04.530680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:46.041 BaseBdev3 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 BaseBdev4_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 [2024-10-13 02:30:04.557134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:46.041 [2024-10-13 02:30:04.557284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.041 [2024-10-13 02:30:04.557311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:46.041 [2024-10-13 02:30:04.557320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.041 [2024-10-13 02:30:04.559410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:46.041 [2024-10-13 02:30:04.559452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:46.041 BaseBdev4 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 spare_malloc 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 spare_delay 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 [2024-10-13 02:30:04.597920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:46.041 [2024-10-13 02:30:04.597994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.041 [2024-10-13 02:30:04.598018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:46.041 [2024-10-13 02:30:04.598027] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.041 [2024-10-13 02:30:04.600269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.041 [2024-10-13 02:30:04.600369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:46.041 spare 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.041 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.041 [2024-10-13 02:30:04.609979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.041 [2024-10-13 02:30:04.611807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.041 [2024-10-13 02:30:04.611878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.041 [2024-10-13 02:30:04.611931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:46.041 [2024-10-13 02:30:04.612025] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:46.041 [2024-10-13 02:30:04.612040] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:46.041 [2024-10-13 02:30:04.612351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:46.042 [2024-10-13 02:30:04.612823] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:46.042 [2024-10-13 02:30:04.612837] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:46.042 [2024-10-13 
02:30:04.613015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.042 "name": "raid_bdev1", 00:15:46.042 "uuid": 
"0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:46.042 "strip_size_kb": 64, 00:15:46.042 "state": "online", 00:15:46.042 "raid_level": "raid5f", 00:15:46.042 "superblock": false, 00:15:46.042 "num_base_bdevs": 4, 00:15:46.042 "num_base_bdevs_discovered": 4, 00:15:46.042 "num_base_bdevs_operational": 4, 00:15:46.042 "base_bdevs_list": [ 00:15:46.042 { 00:15:46.042 "name": "BaseBdev1", 00:15:46.042 "uuid": "22fa219a-7757-5f75-8c50-02072fb91f7b", 00:15:46.042 "is_configured": true, 00:15:46.042 "data_offset": 0, 00:15:46.042 "data_size": 65536 00:15:46.042 }, 00:15:46.042 { 00:15:46.042 "name": "BaseBdev2", 00:15:46.042 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:46.042 "is_configured": true, 00:15:46.042 "data_offset": 0, 00:15:46.042 "data_size": 65536 00:15:46.042 }, 00:15:46.042 { 00:15:46.042 "name": "BaseBdev3", 00:15:46.042 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:46.042 "is_configured": true, 00:15:46.042 "data_offset": 0, 00:15:46.042 "data_size": 65536 00:15:46.042 }, 00:15:46.042 { 00:15:46.042 "name": "BaseBdev4", 00:15:46.042 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:46.042 "is_configured": true, 00:15:46.042 "data_offset": 0, 00:15:46.042 "data_size": 65536 00:15:46.042 } 00:15:46.042 ] 00:15:46.042 }' 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.042 02:30:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 [2024-10-13 02:30:05.070208] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:46.610 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:46.869 [2024-10-13 02:30:05.337600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:46.869 /dev/nbd0 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.869 1+0 records in 00:15:46.869 1+0 records out 00:15:46.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252554 s, 16.2 MB/s 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.869 02:30:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:46.869 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:47.128 512+0 records in 00:15:47.128 512+0 records out 00:15:47.128 100663296 bytes (101 MB, 96 MiB) copied, 0.397631 s, 253 MB/s 00:15:47.128 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:47.387 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.387 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:47.387 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.387 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:47.387 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.387 02:30:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:47.387 [2024-10-13 02:30:06.021457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.387 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.387 [2024-10-13 02:30:06.063142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.646 "name": "raid_bdev1", 00:15:47.646 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:47.646 "strip_size_kb": 64, 00:15:47.646 "state": "online", 00:15:47.646 "raid_level": "raid5f", 00:15:47.646 "superblock": false, 00:15:47.646 "num_base_bdevs": 4, 00:15:47.646 "num_base_bdevs_discovered": 3, 00:15:47.646 "num_base_bdevs_operational": 3, 00:15:47.646 "base_bdevs_list": [ 00:15:47.646 { 00:15:47.646 "name": null, 00:15:47.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.646 "is_configured": false, 00:15:47.646 "data_offset": 0, 00:15:47.646 "data_size": 65536 00:15:47.646 }, 00:15:47.646 { 00:15:47.646 "name": "BaseBdev2", 00:15:47.646 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:47.646 "is_configured": true, 00:15:47.646 
"data_offset": 0, 00:15:47.646 "data_size": 65536 00:15:47.646 }, 00:15:47.646 { 00:15:47.646 "name": "BaseBdev3", 00:15:47.646 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:47.646 "is_configured": true, 00:15:47.646 "data_offset": 0, 00:15:47.646 "data_size": 65536 00:15:47.646 }, 00:15:47.646 { 00:15:47.646 "name": "BaseBdev4", 00:15:47.646 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:47.646 "is_configured": true, 00:15:47.646 "data_offset": 0, 00:15:47.646 "data_size": 65536 00:15:47.646 } 00:15:47.646 ] 00:15:47.646 }' 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.646 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.907 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:47.907 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.907 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.907 [2024-10-13 02:30:06.498418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.907 [2024-10-13 02:30:06.501947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:15:47.907 [2024-10-13 02:30:06.504236] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.907 02:30:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.907 02:30:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:48.844 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.844 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.844 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:48.844 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.844 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.844 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.844 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.844 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.844 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.103 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.103 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.103 "name": "raid_bdev1", 00:15:49.103 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:49.103 "strip_size_kb": 64, 00:15:49.103 "state": "online", 00:15:49.103 "raid_level": "raid5f", 00:15:49.103 "superblock": false, 00:15:49.103 "num_base_bdevs": 4, 00:15:49.103 "num_base_bdevs_discovered": 4, 00:15:49.103 "num_base_bdevs_operational": 4, 00:15:49.103 "process": { 00:15:49.103 "type": "rebuild", 00:15:49.103 "target": "spare", 00:15:49.103 "progress": { 00:15:49.103 "blocks": 19200, 00:15:49.103 "percent": 9 00:15:49.103 } 00:15:49.103 }, 00:15:49.103 "base_bdevs_list": [ 00:15:49.103 { 00:15:49.103 "name": "spare", 00:15:49.103 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:49.103 "is_configured": true, 00:15:49.103 "data_offset": 0, 00:15:49.103 "data_size": 65536 00:15:49.103 }, 00:15:49.103 { 00:15:49.103 "name": "BaseBdev2", 00:15:49.103 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:49.103 "is_configured": true, 00:15:49.103 "data_offset": 0, 00:15:49.103 "data_size": 65536 00:15:49.103 }, 00:15:49.103 { 00:15:49.103 "name": "BaseBdev3", 00:15:49.103 "uuid": 
"afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:49.103 "is_configured": true, 00:15:49.103 "data_offset": 0, 00:15:49.103 "data_size": 65536 00:15:49.103 }, 00:15:49.103 { 00:15:49.103 "name": "BaseBdev4", 00:15:49.103 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:49.103 "is_configured": true, 00:15:49.103 "data_offset": 0, 00:15:49.103 "data_size": 65536 00:15:49.103 } 00:15:49.103 ] 00:15:49.103 }' 00:15:49.103 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.104 [2024-10-13 02:30:07.647460] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.104 [2024-10-13 02:30:07.712328] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:49.104 [2024-10-13 02:30:07.712419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.104 [2024-10-13 02:30:07.712438] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.104 [2024-10-13 02:30:07.712446] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.104 "name": "raid_bdev1", 00:15:49.104 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:49.104 "strip_size_kb": 64, 00:15:49.104 "state": "online", 00:15:49.104 "raid_level": "raid5f", 00:15:49.104 "superblock": false, 00:15:49.104 "num_base_bdevs": 4, 00:15:49.104 "num_base_bdevs_discovered": 3, 00:15:49.104 
"num_base_bdevs_operational": 3, 00:15:49.104 "base_bdevs_list": [ 00:15:49.104 { 00:15:49.104 "name": null, 00:15:49.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.104 "is_configured": false, 00:15:49.104 "data_offset": 0, 00:15:49.104 "data_size": 65536 00:15:49.104 }, 00:15:49.104 { 00:15:49.104 "name": "BaseBdev2", 00:15:49.104 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:49.104 "is_configured": true, 00:15:49.104 "data_offset": 0, 00:15:49.104 "data_size": 65536 00:15:49.104 }, 00:15:49.104 { 00:15:49.104 "name": "BaseBdev3", 00:15:49.104 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:49.104 "is_configured": true, 00:15:49.104 "data_offset": 0, 00:15:49.104 "data_size": 65536 00:15:49.104 }, 00:15:49.104 { 00:15:49.104 "name": "BaseBdev4", 00:15:49.104 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:49.104 "is_configured": true, 00:15:49.104 "data_offset": 0, 00:15:49.104 "data_size": 65536 00:15:49.104 } 00:15:49.104 ] 00:15:49.104 }' 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.104 02:30:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.672 02:30:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.672 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.672 "name": "raid_bdev1", 00:15:49.672 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:49.672 "strip_size_kb": 64, 00:15:49.672 "state": "online", 00:15:49.672 "raid_level": "raid5f", 00:15:49.672 "superblock": false, 00:15:49.672 "num_base_bdevs": 4, 00:15:49.672 "num_base_bdevs_discovered": 3, 00:15:49.672 "num_base_bdevs_operational": 3, 00:15:49.673 "base_bdevs_list": [ 00:15:49.673 { 00:15:49.673 "name": null, 00:15:49.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.673 "is_configured": false, 00:15:49.673 "data_offset": 0, 00:15:49.673 "data_size": 65536 00:15:49.673 }, 00:15:49.673 { 00:15:49.673 "name": "BaseBdev2", 00:15:49.673 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:49.673 "is_configured": true, 00:15:49.673 "data_offset": 0, 00:15:49.673 "data_size": 65536 00:15:49.673 }, 00:15:49.673 { 00:15:49.673 "name": "BaseBdev3", 00:15:49.673 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:49.673 "is_configured": true, 00:15:49.673 "data_offset": 0, 00:15:49.673 "data_size": 65536 00:15:49.673 }, 00:15:49.673 { 00:15:49.673 "name": "BaseBdev4", 00:15:49.673 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:49.673 "is_configured": true, 00:15:49.673 "data_offset": 0, 00:15:49.673 "data_size": 65536 00:15:49.673 } 00:15:49.673 ] 00:15:49.673 }' 00:15:49.673 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.673 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.673 02:30:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.673 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.673 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:49.673 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.673 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.673 [2024-10-13 02:30:08.265081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.673 [2024-10-13 02:30:08.268539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:15:49.673 [2024-10-13 02:30:08.270752] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:49.673 02:30:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.673 02:30:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:50.609 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.609 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.609 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.609 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.609 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.609 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.609 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.609 02:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.609 
02:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.868 02:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.868 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.868 "name": "raid_bdev1", 00:15:50.868 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:50.868 "strip_size_kb": 64, 00:15:50.868 "state": "online", 00:15:50.868 "raid_level": "raid5f", 00:15:50.868 "superblock": false, 00:15:50.868 "num_base_bdevs": 4, 00:15:50.868 "num_base_bdevs_discovered": 4, 00:15:50.868 "num_base_bdevs_operational": 4, 00:15:50.868 "process": { 00:15:50.868 "type": "rebuild", 00:15:50.868 "target": "spare", 00:15:50.868 "progress": { 00:15:50.868 "blocks": 19200, 00:15:50.868 "percent": 9 00:15:50.868 } 00:15:50.868 }, 00:15:50.868 "base_bdevs_list": [ 00:15:50.868 { 00:15:50.868 "name": "spare", 00:15:50.868 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:50.868 "is_configured": true, 00:15:50.868 "data_offset": 0, 00:15:50.868 "data_size": 65536 00:15:50.868 }, 00:15:50.868 { 00:15:50.868 "name": "BaseBdev2", 00:15:50.868 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:50.868 "is_configured": true, 00:15:50.868 "data_offset": 0, 00:15:50.868 "data_size": 65536 00:15:50.868 }, 00:15:50.869 { 00:15:50.869 "name": "BaseBdev3", 00:15:50.869 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:50.869 "is_configured": true, 00:15:50.869 "data_offset": 0, 00:15:50.869 "data_size": 65536 00:15:50.869 }, 00:15:50.869 { 00:15:50.869 "name": "BaseBdev4", 00:15:50.869 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:50.869 "is_configured": true, 00:15:50.869 "data_offset": 0, 00:15:50.869 "data_size": 65536 00:15:50.869 } 00:15:50.869 ] 00:15:50.869 }' 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=517 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:50.869 "name": "raid_bdev1", 00:15:50.869 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:50.869 "strip_size_kb": 64, 00:15:50.869 "state": "online", 00:15:50.869 "raid_level": "raid5f", 00:15:50.869 "superblock": false, 00:15:50.869 "num_base_bdevs": 4, 00:15:50.869 "num_base_bdevs_discovered": 4, 00:15:50.869 "num_base_bdevs_operational": 4, 00:15:50.869 "process": { 00:15:50.869 "type": "rebuild", 00:15:50.869 "target": "spare", 00:15:50.869 "progress": { 00:15:50.869 "blocks": 21120, 00:15:50.869 "percent": 10 00:15:50.869 } 00:15:50.869 }, 00:15:50.869 "base_bdevs_list": [ 00:15:50.869 { 00:15:50.869 "name": "spare", 00:15:50.869 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:50.869 "is_configured": true, 00:15:50.869 "data_offset": 0, 00:15:50.869 "data_size": 65536 00:15:50.869 }, 00:15:50.869 { 00:15:50.869 "name": "BaseBdev2", 00:15:50.869 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:50.869 "is_configured": true, 00:15:50.869 "data_offset": 0, 00:15:50.869 "data_size": 65536 00:15:50.869 }, 00:15:50.869 { 00:15:50.869 "name": "BaseBdev3", 00:15:50.869 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:50.869 "is_configured": true, 00:15:50.869 "data_offset": 0, 00:15:50.869 "data_size": 65536 00:15:50.869 }, 00:15:50.869 { 00:15:50.869 "name": "BaseBdev4", 00:15:50.869 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:50.869 "is_configured": true, 00:15:50.869 "data_offset": 0, 00:15:50.869 "data_size": 65536 00:15:50.869 } 00:15:50.869 ] 00:15:50.869 }' 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.869 02:30:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.869 02:30:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.247 "name": "raid_bdev1", 00:15:52.247 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:52.247 "strip_size_kb": 64, 00:15:52.247 "state": "online", 00:15:52.247 "raid_level": "raid5f", 00:15:52.247 "superblock": false, 00:15:52.247 "num_base_bdevs": 4, 00:15:52.247 "num_base_bdevs_discovered": 4, 00:15:52.247 "num_base_bdevs_operational": 4, 00:15:52.247 "process": { 00:15:52.247 "type": "rebuild", 00:15:52.247 "target": "spare", 00:15:52.247 "progress": { 00:15:52.247 "blocks": 42240, 00:15:52.247 "percent": 21 00:15:52.247 } 00:15:52.247 }, 00:15:52.247 "base_bdevs_list": [ 00:15:52.247 { 
00:15:52.247 "name": "spare", 00:15:52.247 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:52.247 "is_configured": true, 00:15:52.247 "data_offset": 0, 00:15:52.247 "data_size": 65536 00:15:52.247 }, 00:15:52.247 { 00:15:52.247 "name": "BaseBdev2", 00:15:52.247 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:52.247 "is_configured": true, 00:15:52.247 "data_offset": 0, 00:15:52.247 "data_size": 65536 00:15:52.247 }, 00:15:52.247 { 00:15:52.247 "name": "BaseBdev3", 00:15:52.247 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:52.247 "is_configured": true, 00:15:52.247 "data_offset": 0, 00:15:52.247 "data_size": 65536 00:15:52.247 }, 00:15:52.247 { 00:15:52.247 "name": "BaseBdev4", 00:15:52.247 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:52.247 "is_configured": true, 00:15:52.247 "data_offset": 0, 00:15:52.247 "data_size": 65536 00:15:52.247 } 00:15:52.247 ] 00:15:52.247 }' 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.247 02:30:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.184 "name": "raid_bdev1", 00:15:53.184 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:53.184 "strip_size_kb": 64, 00:15:53.184 "state": "online", 00:15:53.184 "raid_level": "raid5f", 00:15:53.184 "superblock": false, 00:15:53.184 "num_base_bdevs": 4, 00:15:53.184 "num_base_bdevs_discovered": 4, 00:15:53.184 "num_base_bdevs_operational": 4, 00:15:53.184 "process": { 00:15:53.184 "type": "rebuild", 00:15:53.184 "target": "spare", 00:15:53.184 "progress": { 00:15:53.184 "blocks": 63360, 00:15:53.184 "percent": 32 00:15:53.184 } 00:15:53.184 }, 00:15:53.184 "base_bdevs_list": [ 00:15:53.184 { 00:15:53.184 "name": "spare", 00:15:53.184 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:53.184 "is_configured": true, 00:15:53.184 "data_offset": 0, 00:15:53.184 "data_size": 65536 00:15:53.184 }, 00:15:53.184 { 00:15:53.184 "name": "BaseBdev2", 00:15:53.184 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:53.184 "is_configured": true, 00:15:53.184 "data_offset": 0, 00:15:53.184 "data_size": 65536 00:15:53.184 }, 00:15:53.184 { 00:15:53.184 "name": "BaseBdev3", 00:15:53.184 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:53.184 "is_configured": true, 00:15:53.184 "data_offset": 0, 00:15:53.184 
"data_size": 65536 00:15:53.184 }, 00:15:53.184 { 00:15:53.184 "name": "BaseBdev4", 00:15:53.184 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:53.184 "is_configured": true, 00:15:53.184 "data_offset": 0, 00:15:53.184 "data_size": 65536 00:15:53.184 } 00:15:53.184 ] 00:15:53.184 }' 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.184 02:30:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.121 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.121 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.121 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.121 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.121 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.121 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.381 "name": "raid_bdev1", 00:15:54.381 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:54.381 "strip_size_kb": 64, 00:15:54.381 "state": "online", 00:15:54.381 "raid_level": "raid5f", 00:15:54.381 "superblock": false, 00:15:54.381 "num_base_bdevs": 4, 00:15:54.381 "num_base_bdevs_discovered": 4, 00:15:54.381 "num_base_bdevs_operational": 4, 00:15:54.381 "process": { 00:15:54.381 "type": "rebuild", 00:15:54.381 "target": "spare", 00:15:54.381 "progress": { 00:15:54.381 "blocks": 86400, 00:15:54.381 "percent": 43 00:15:54.381 } 00:15:54.381 }, 00:15:54.381 "base_bdevs_list": [ 00:15:54.381 { 00:15:54.381 "name": "spare", 00:15:54.381 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:54.381 "is_configured": true, 00:15:54.381 "data_offset": 0, 00:15:54.381 "data_size": 65536 00:15:54.381 }, 00:15:54.381 { 00:15:54.381 "name": "BaseBdev2", 00:15:54.381 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:54.381 "is_configured": true, 00:15:54.381 "data_offset": 0, 00:15:54.381 "data_size": 65536 00:15:54.381 }, 00:15:54.381 { 00:15:54.381 "name": "BaseBdev3", 00:15:54.381 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:54.381 "is_configured": true, 00:15:54.381 "data_offset": 0, 00:15:54.381 "data_size": 65536 00:15:54.381 }, 00:15:54.381 { 00:15:54.381 "name": "BaseBdev4", 00:15:54.381 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:54.381 "is_configured": true, 00:15:54.381 "data_offset": 0, 00:15:54.381 "data_size": 65536 00:15:54.381 } 00:15:54.381 ] 00:15:54.381 }' 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.381 02:30:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.318 02:30:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.577 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.577 "name": "raid_bdev1", 00:15:55.577 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:55.577 "strip_size_kb": 64, 00:15:55.577 "state": "online", 00:15:55.577 "raid_level": "raid5f", 00:15:55.577 "superblock": false, 00:15:55.577 "num_base_bdevs": 4, 00:15:55.577 "num_base_bdevs_discovered": 4, 00:15:55.577 "num_base_bdevs_operational": 4, 00:15:55.577 "process": { 00:15:55.577 "type": "rebuild", 00:15:55.577 "target": "spare", 00:15:55.577 
"progress": { 00:15:55.577 "blocks": 107520, 00:15:55.577 "percent": 54 00:15:55.577 } 00:15:55.577 }, 00:15:55.577 "base_bdevs_list": [ 00:15:55.577 { 00:15:55.577 "name": "spare", 00:15:55.577 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:55.577 "is_configured": true, 00:15:55.577 "data_offset": 0, 00:15:55.577 "data_size": 65536 00:15:55.577 }, 00:15:55.577 { 00:15:55.577 "name": "BaseBdev2", 00:15:55.577 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:55.577 "is_configured": true, 00:15:55.577 "data_offset": 0, 00:15:55.577 "data_size": 65536 00:15:55.577 }, 00:15:55.577 { 00:15:55.577 "name": "BaseBdev3", 00:15:55.577 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:55.577 "is_configured": true, 00:15:55.577 "data_offset": 0, 00:15:55.577 "data_size": 65536 00:15:55.577 }, 00:15:55.577 { 00:15:55.577 "name": "BaseBdev4", 00:15:55.577 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:55.577 "is_configured": true, 00:15:55.577 "data_offset": 0, 00:15:55.577 "data_size": 65536 00:15:55.577 } 00:15:55.577 ] 00:15:55.577 }' 00:15:55.577 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.577 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.577 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.577 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.577 02:30:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.514 02:30:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.514 "name": "raid_bdev1", 00:15:56.514 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:56.514 "strip_size_kb": 64, 00:15:56.514 "state": "online", 00:15:56.514 "raid_level": "raid5f", 00:15:56.514 "superblock": false, 00:15:56.514 "num_base_bdevs": 4, 00:15:56.514 "num_base_bdevs_discovered": 4, 00:15:56.514 "num_base_bdevs_operational": 4, 00:15:56.514 "process": { 00:15:56.514 "type": "rebuild", 00:15:56.514 "target": "spare", 00:15:56.514 "progress": { 00:15:56.514 "blocks": 128640, 00:15:56.514 "percent": 65 00:15:56.514 } 00:15:56.514 }, 00:15:56.514 "base_bdevs_list": [ 00:15:56.514 { 00:15:56.514 "name": "spare", 00:15:56.514 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:56.514 "is_configured": true, 00:15:56.514 "data_offset": 0, 00:15:56.514 "data_size": 65536 00:15:56.514 }, 00:15:56.514 { 00:15:56.514 "name": "BaseBdev2", 00:15:56.514 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:56.514 "is_configured": true, 00:15:56.514 "data_offset": 0, 00:15:56.514 "data_size": 65536 00:15:56.514 }, 00:15:56.514 { 
00:15:56.514 "name": "BaseBdev3", 00:15:56.514 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:56.514 "is_configured": true, 00:15:56.514 "data_offset": 0, 00:15:56.514 "data_size": 65536 00:15:56.514 }, 00:15:56.514 { 00:15:56.514 "name": "BaseBdev4", 00:15:56.514 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:56.514 "is_configured": true, 00:15:56.514 "data_offset": 0, 00:15:56.514 "data_size": 65536 00:15:56.514 } 00:15:56.514 ] 00:15:56.514 }' 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.514 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.774 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.774 02:30:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.711 "name": "raid_bdev1", 00:15:57.711 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:57.711 "strip_size_kb": 64, 00:15:57.711 "state": "online", 00:15:57.711 "raid_level": "raid5f", 00:15:57.711 "superblock": false, 00:15:57.711 "num_base_bdevs": 4, 00:15:57.711 "num_base_bdevs_discovered": 4, 00:15:57.711 "num_base_bdevs_operational": 4, 00:15:57.711 "process": { 00:15:57.711 "type": "rebuild", 00:15:57.711 "target": "spare", 00:15:57.711 "progress": { 00:15:57.711 "blocks": 151680, 00:15:57.711 "percent": 77 00:15:57.711 } 00:15:57.711 }, 00:15:57.711 "base_bdevs_list": [ 00:15:57.711 { 00:15:57.711 "name": "spare", 00:15:57.711 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:57.711 "is_configured": true, 00:15:57.711 "data_offset": 0, 00:15:57.711 "data_size": 65536 00:15:57.711 }, 00:15:57.711 { 00:15:57.711 "name": "BaseBdev2", 00:15:57.711 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:57.711 "is_configured": true, 00:15:57.711 "data_offset": 0, 00:15:57.711 "data_size": 65536 00:15:57.711 }, 00:15:57.711 { 00:15:57.711 "name": "BaseBdev3", 00:15:57.711 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:57.711 "is_configured": true, 00:15:57.711 "data_offset": 0, 00:15:57.711 "data_size": 65536 00:15:57.711 }, 00:15:57.711 { 00:15:57.711 "name": "BaseBdev4", 00:15:57.711 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:57.711 "is_configured": true, 00:15:57.711 "data_offset": 0, 00:15:57.711 "data_size": 65536 00:15:57.711 } 00:15:57.711 ] 00:15:57.711 }' 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.711 02:30:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.711 02:30:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.100 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.100 "name": "raid_bdev1", 00:15:59.100 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:15:59.100 "strip_size_kb": 64, 00:15:59.100 "state": "online", 00:15:59.100 "raid_level": "raid5f", 00:15:59.100 "superblock": false, 00:15:59.100 "num_base_bdevs": 4, 00:15:59.100 
"num_base_bdevs_discovered": 4, 00:15:59.100 "num_base_bdevs_operational": 4, 00:15:59.100 "process": { 00:15:59.100 "type": "rebuild", 00:15:59.100 "target": "spare", 00:15:59.100 "progress": { 00:15:59.100 "blocks": 172800, 00:15:59.100 "percent": 87 00:15:59.100 } 00:15:59.100 }, 00:15:59.100 "base_bdevs_list": [ 00:15:59.100 { 00:15:59.100 "name": "spare", 00:15:59.100 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:15:59.100 "is_configured": true, 00:15:59.100 "data_offset": 0, 00:15:59.100 "data_size": 65536 00:15:59.100 }, 00:15:59.100 { 00:15:59.100 "name": "BaseBdev2", 00:15:59.100 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:15:59.100 "is_configured": true, 00:15:59.100 "data_offset": 0, 00:15:59.100 "data_size": 65536 00:15:59.100 }, 00:15:59.100 { 00:15:59.100 "name": "BaseBdev3", 00:15:59.100 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:15:59.100 "is_configured": true, 00:15:59.100 "data_offset": 0, 00:15:59.100 "data_size": 65536 00:15:59.100 }, 00:15:59.100 { 00:15:59.100 "name": "BaseBdev4", 00:15:59.100 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:15:59.101 "is_configured": true, 00:15:59.101 "data_offset": 0, 00:15:59.101 "data_size": 65536 00:15:59.101 } 00:15:59.101 ] 00:15:59.101 }' 00:15:59.101 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.101 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.101 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.101 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.101 02:30:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.048 "name": "raid_bdev1", 00:16:00.048 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:16:00.048 "strip_size_kb": 64, 00:16:00.048 "state": "online", 00:16:00.048 "raid_level": "raid5f", 00:16:00.048 "superblock": false, 00:16:00.048 "num_base_bdevs": 4, 00:16:00.048 "num_base_bdevs_discovered": 4, 00:16:00.048 "num_base_bdevs_operational": 4, 00:16:00.048 "process": { 00:16:00.048 "type": "rebuild", 00:16:00.048 "target": "spare", 00:16:00.048 "progress": { 00:16:00.048 "blocks": 193920, 00:16:00.048 "percent": 98 00:16:00.048 } 00:16:00.048 }, 00:16:00.048 "base_bdevs_list": [ 00:16:00.048 { 00:16:00.048 "name": "spare", 00:16:00.048 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:16:00.048 "is_configured": true, 00:16:00.048 "data_offset": 0, 00:16:00.048 "data_size": 65536 00:16:00.048 }, 00:16:00.048 { 00:16:00.048 "name": "BaseBdev2", 00:16:00.048 "uuid": 
"b97331e7-263e-5d77-ab86-70a807d05ffe", 00:16:00.048 "is_configured": true, 00:16:00.048 "data_offset": 0, 00:16:00.048 "data_size": 65536 00:16:00.048 }, 00:16:00.048 { 00:16:00.048 "name": "BaseBdev3", 00:16:00.048 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:16:00.048 "is_configured": true, 00:16:00.048 "data_offset": 0, 00:16:00.048 "data_size": 65536 00:16:00.048 }, 00:16:00.048 { 00:16:00.048 "name": "BaseBdev4", 00:16:00.048 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:16:00.048 "is_configured": true, 00:16:00.048 "data_offset": 0, 00:16:00.048 "data_size": 65536 00:16:00.048 } 00:16:00.048 ] 00:16:00.048 }' 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.048 [2024-10-13 02:30:18.636437] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:00.048 [2024-10-13 02:30:18.636649] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:00.048 [2024-10-13 02:30:18.636703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.048 02:30:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.984 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.984 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.984 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.984 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:00.984 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.984 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.244 "name": "raid_bdev1", 00:16:01.244 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:16:01.244 "strip_size_kb": 64, 00:16:01.244 "state": "online", 00:16:01.244 "raid_level": "raid5f", 00:16:01.244 "superblock": false, 00:16:01.244 "num_base_bdevs": 4, 00:16:01.244 "num_base_bdevs_discovered": 4, 00:16:01.244 "num_base_bdevs_operational": 4, 00:16:01.244 "base_bdevs_list": [ 00:16:01.244 { 00:16:01.244 "name": "spare", 00:16:01.244 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:16:01.244 "is_configured": true, 00:16:01.244 "data_offset": 0, 00:16:01.244 "data_size": 65536 00:16:01.244 }, 00:16:01.244 { 00:16:01.244 "name": "BaseBdev2", 00:16:01.244 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:16:01.244 "is_configured": true, 00:16:01.244 "data_offset": 0, 00:16:01.244 "data_size": 65536 00:16:01.244 }, 00:16:01.244 { 00:16:01.244 "name": "BaseBdev3", 00:16:01.244 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:16:01.244 "is_configured": true, 00:16:01.244 "data_offset": 0, 00:16:01.244 "data_size": 65536 00:16:01.244 }, 00:16:01.244 { 00:16:01.244 "name": "BaseBdev4", 00:16:01.244 
"uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:16:01.244 "is_configured": true, 00:16:01.244 "data_offset": 0, 00:16:01.244 "data_size": 65536 00:16:01.244 } 00:16:01.244 ] 00:16:01.244 }' 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.244 "name": "raid_bdev1", 00:16:01.244 "uuid": 
"0373a0e9-440e-437a-9489-80381dbda0b7", 00:16:01.244 "strip_size_kb": 64, 00:16:01.244 "state": "online", 00:16:01.244 "raid_level": "raid5f", 00:16:01.244 "superblock": false, 00:16:01.244 "num_base_bdevs": 4, 00:16:01.244 "num_base_bdevs_discovered": 4, 00:16:01.244 "num_base_bdevs_operational": 4, 00:16:01.244 "base_bdevs_list": [ 00:16:01.244 { 00:16:01.244 "name": "spare", 00:16:01.244 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:16:01.244 "is_configured": true, 00:16:01.244 "data_offset": 0, 00:16:01.244 "data_size": 65536 00:16:01.244 }, 00:16:01.244 { 00:16:01.244 "name": "BaseBdev2", 00:16:01.244 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:16:01.244 "is_configured": true, 00:16:01.244 "data_offset": 0, 00:16:01.244 "data_size": 65536 00:16:01.244 }, 00:16:01.244 { 00:16:01.244 "name": "BaseBdev3", 00:16:01.244 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:16:01.244 "is_configured": true, 00:16:01.244 "data_offset": 0, 00:16:01.244 "data_size": 65536 00:16:01.244 }, 00:16:01.244 { 00:16:01.244 "name": "BaseBdev4", 00:16:01.244 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:16:01.244 "is_configured": true, 00:16:01.244 "data_offset": 0, 00:16:01.244 "data_size": 65536 00:16:01.244 } 00:16:01.244 ] 00:16:01.244 }' 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.244 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.245 02:30:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.245 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.504 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.504 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.504 "name": "raid_bdev1", 00:16:01.504 "uuid": "0373a0e9-440e-437a-9489-80381dbda0b7", 00:16:01.504 "strip_size_kb": 64, 00:16:01.504 "state": "online", 00:16:01.504 "raid_level": "raid5f", 00:16:01.504 "superblock": false, 00:16:01.504 "num_base_bdevs": 4, 00:16:01.504 "num_base_bdevs_discovered": 4, 00:16:01.504 "num_base_bdevs_operational": 4, 00:16:01.504 "base_bdevs_list": [ 00:16:01.504 { 00:16:01.504 "name": "spare", 00:16:01.504 "uuid": "00a00844-7e7e-5d02-8160-b4508b7f1aa7", 00:16:01.504 "is_configured": 
true, 00:16:01.504 "data_offset": 0, 00:16:01.504 "data_size": 65536 00:16:01.504 }, 00:16:01.504 { 00:16:01.504 "name": "BaseBdev2", 00:16:01.504 "uuid": "b97331e7-263e-5d77-ab86-70a807d05ffe", 00:16:01.504 "is_configured": true, 00:16:01.504 "data_offset": 0, 00:16:01.504 "data_size": 65536 00:16:01.504 }, 00:16:01.504 { 00:16:01.504 "name": "BaseBdev3", 00:16:01.504 "uuid": "afcda745-2f43-5c67-a416-5e63b2bdc99c", 00:16:01.504 "is_configured": true, 00:16:01.504 "data_offset": 0, 00:16:01.504 "data_size": 65536 00:16:01.504 }, 00:16:01.504 { 00:16:01.504 "name": "BaseBdev4", 00:16:01.504 "uuid": "1fb9db3e-55a0-59cd-948a-e8f98d2db962", 00:16:01.504 "is_configured": true, 00:16:01.504 "data_offset": 0, 00:16:01.504 "data_size": 65536 00:16:01.504 } 00:16:01.504 ] 00:16:01.504 }' 00:16:01.504 02:30:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.504 02:30:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.763 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.763 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.763 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.764 [2024-10-13 02:30:20.371358] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.764 [2024-10-13 02:30:20.371401] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.764 [2024-10-13 02:30:20.371521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.764 [2024-10-13 02:30:20.371615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.764 [2024-10-13 02:30:20.371639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:01.764 02:30:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:01.764 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:02.023 /dev/nbd0 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.023 1+0 records in 00:16:02.023 1+0 records out 00:16:02.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419232 s, 9.8 MB/s 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.023 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:02.282 /dev/nbd1 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.282 1+0 records in 00:16:02.282 1+0 records out 00:16:02.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393029 s, 10.4 MB/s 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.282 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:02.541 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:02.541 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.541 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.541 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.541 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:02.541 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.541 02:30:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.541 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94956 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 94956 ']' 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 94956 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:16:02.801 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94956 00:16:03.059 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:03.059 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:03.059 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94956' 00:16:03.059 killing process with pid 94956 00:16:03.059 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 94956 00:16:03.059 Received shutdown signal, test time was about 60.000000 seconds 00:16:03.059 00:16:03.059 Latency(us) 00:16:03.059 [2024-10-13T02:30:21.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.059 [2024-10-13T02:30:21.743Z] =================================================================================================================== 00:16:03.059 [2024-10-13T02:30:21.743Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:03.059 [2024-10-13 02:30:21.512814] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.059 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 94956 00:16:03.059 [2024-10-13 02:30:21.563609] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:03.319 00:16:03.319 real 0m18.338s 00:16:03.319 user 0m22.152s 00:16:03.319 sys 0m2.233s 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.319 ************************************ 00:16:03.319 END TEST raid5f_rebuild_test 00:16:03.319 ************************************ 00:16:03.319 02:30:21 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:03.319 02:30:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:03.319 02:30:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:03.319 02:30:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.319 ************************************ 00:16:03.319 START TEST raid5f_rebuild_test_sb 00:16:03.319 ************************************ 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95458 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95458 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95458 ']' 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.319 02:30:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.319 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:03.319 Zero copy mechanism will not be used. 00:16:03.319 [2024-10-13 02:30:21.963003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:03.319 [2024-10-13 02:30:21.963165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95458 ] 00:16:03.578 [2024-10-13 02:30:22.111992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.578 [2024-10-13 02:30:22.163280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.578 [2024-10-13 02:30:22.205268] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.578 [2024-10-13 02:30:22.205317] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.146 BaseBdev1_malloc 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.146 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.146 [2024-10-13 02:30:22.827820] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:04.146 [2024-10-13 02:30:22.827907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.146 [2024-10-13 02:30:22.827939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:04.146 [2024-10-13 02:30:22.827953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.405 [2024-10-13 02:30:22.829989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.405 [2024-10-13 02:30:22.830036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.405 BaseBdev1 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.405 BaseBdev2_malloc 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.405 [2024-10-13 02:30:22.868950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:04.405 [2024-10-13 02:30:22.869011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:04.405 [2024-10-13 02:30:22.869033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:04.405 [2024-10-13 02:30:22.869043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.405 [2024-10-13 02:30:22.871266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.405 [2024-10-13 02:30:22.871305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:04.405 BaseBdev2 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.405 BaseBdev3_malloc 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.405 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.405 [2024-10-13 02:30:22.897760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:04.406 [2024-10-13 02:30:22.897829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.406 [2024-10-13 02:30:22.897855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:04.406 [2024-10-13 
02:30:22.897864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.406 [2024-10-13 02:30:22.899918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.406 [2024-10-13 02:30:22.899977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:04.406 BaseBdev3 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 BaseBdev4_malloc 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 [2024-10-13 02:30:22.926450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:04.406 [2024-10-13 02:30:22.926528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.406 [2024-10-13 02:30:22.926551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:04.406 [2024-10-13 02:30:22.926559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.406 [2024-10-13 02:30:22.928607] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:04.406 [2024-10-13 02:30:22.928651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:04.406 BaseBdev4 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 spare_malloc 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 spare_delay 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 [2024-10-13 02:30:22.966969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:04.406 [2024-10-13 02:30:22.967050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.406 [2024-10-13 02:30:22.967070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 
00:16:04.406 [2024-10-13 02:30:22.967079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.406 [2024-10-13 02:30:22.969121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.406 [2024-10-13 02:30:22.969162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.406 spare 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 [2024-10-13 02:30:22.979033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.406 [2024-10-13 02:30:22.980834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.406 [2024-10-13 02:30:22.980934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.406 [2024-10-13 02:30:22.980984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:04.406 [2024-10-13 02:30:22.981154] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:04.406 [2024-10-13 02:30:22.981173] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.406 [2024-10-13 02:30:22.981430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:04.406 [2024-10-13 02:30:22.981884] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:04.406 [2024-10-13 02:30:22.981906] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000001200 00:16:04.406 [2024-10-13 02:30:22.982035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.406 02:30:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.406 02:30:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.406 "name": "raid_bdev1", 00:16:04.406 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:04.406 "strip_size_kb": 64, 00:16:04.406 "state": "online", 00:16:04.406 "raid_level": "raid5f", 00:16:04.406 "superblock": true, 00:16:04.406 "num_base_bdevs": 4, 00:16:04.406 "num_base_bdevs_discovered": 4, 00:16:04.406 "num_base_bdevs_operational": 4, 00:16:04.406 "base_bdevs_list": [ 00:16:04.406 { 00:16:04.406 "name": "BaseBdev1", 00:16:04.406 "uuid": "45aa6f2c-b025-5f47-9135-0fd2e1bf882d", 00:16:04.406 "is_configured": true, 00:16:04.406 "data_offset": 2048, 00:16:04.406 "data_size": 63488 00:16:04.406 }, 00:16:04.406 { 00:16:04.406 "name": "BaseBdev2", 00:16:04.406 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:04.406 "is_configured": true, 00:16:04.406 "data_offset": 2048, 00:16:04.406 "data_size": 63488 00:16:04.406 }, 00:16:04.406 { 00:16:04.406 "name": "BaseBdev3", 00:16:04.406 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:04.406 "is_configured": true, 00:16:04.406 "data_offset": 2048, 00:16:04.406 "data_size": 63488 00:16:04.406 }, 00:16:04.406 { 00:16:04.406 "name": "BaseBdev4", 00:16:04.406 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:04.406 "is_configured": true, 00:16:04.406 "data_offset": 2048, 00:16:04.406 "data_size": 63488 00:16:04.406 } 00:16:04.406 ] 00:16:04.406 }' 00:16:04.406 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.406 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:04.974 [2024-10-13 02:30:23.439174] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.974 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:04.975 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.975 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:04.975 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.975 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:04.975 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.975 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.975 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:05.233 [2024-10-13 02:30:23.746495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:16:05.233 /dev/nbd0 00:16:05.233 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.233 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.234 1+0 records in 00:16:05.234 1+0 records out 00:16:05.234 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000284609 s, 14.4 MB/s 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:05.234 02:30:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:05.801 496+0 records in 00:16:05.801 496+0 records out 00:16:05.801 97517568 bytes (98 MB, 93 MiB) copied, 0.433334 s, 225 MB/s 00:16:05.801 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:05.801 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.801 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:05.801 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.801 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:16:05.801 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.801 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.060 [2024-10-13 02:30:24.493202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.060 [2024-10-13 02:30:24.513277] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.060 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.061 "name": "raid_bdev1", 00:16:06.061 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:06.061 "strip_size_kb": 64, 00:16:06.061 "state": "online", 00:16:06.061 "raid_level": "raid5f", 00:16:06.061 "superblock": true, 00:16:06.061 "num_base_bdevs": 4, 00:16:06.061 "num_base_bdevs_discovered": 3, 00:16:06.061 "num_base_bdevs_operational": 3, 00:16:06.061 "base_bdevs_list": [ 00:16:06.061 { 00:16:06.061 "name": null, 
00:16:06.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.061 "is_configured": false, 00:16:06.061 "data_offset": 0, 00:16:06.061 "data_size": 63488 00:16:06.061 }, 00:16:06.061 { 00:16:06.061 "name": "BaseBdev2", 00:16:06.061 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:06.061 "is_configured": true, 00:16:06.061 "data_offset": 2048, 00:16:06.061 "data_size": 63488 00:16:06.061 }, 00:16:06.061 { 00:16:06.061 "name": "BaseBdev3", 00:16:06.061 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:06.061 "is_configured": true, 00:16:06.061 "data_offset": 2048, 00:16:06.061 "data_size": 63488 00:16:06.061 }, 00:16:06.061 { 00:16:06.061 "name": "BaseBdev4", 00:16:06.061 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:06.061 "is_configured": true, 00:16:06.061 "data_offset": 2048, 00:16:06.061 "data_size": 63488 00:16:06.061 } 00:16:06.061 ] 00:16:06.061 }' 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.061 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.320 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:06.320 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.320 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.320 [2024-10-13 02:30:24.992488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.320 [2024-10-13 02:30:24.995997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:16:06.320 [2024-10-13 02:30:24.998334] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.320 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.320 02:30:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.698 "name": "raid_bdev1", 00:16:07.698 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:07.698 "strip_size_kb": 64, 00:16:07.698 "state": "online", 00:16:07.698 "raid_level": "raid5f", 00:16:07.698 "superblock": true, 00:16:07.698 "num_base_bdevs": 4, 00:16:07.698 "num_base_bdevs_discovered": 4, 00:16:07.698 "num_base_bdevs_operational": 4, 00:16:07.698 "process": { 00:16:07.698 "type": "rebuild", 00:16:07.698 "target": "spare", 00:16:07.698 "progress": { 00:16:07.698 "blocks": 19200, 00:16:07.698 "percent": 10 00:16:07.698 } 00:16:07.698 }, 00:16:07.698 "base_bdevs_list": [ 00:16:07.698 { 00:16:07.698 "name": "spare", 00:16:07.698 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:07.698 
"is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 "data_size": 63488 00:16:07.698 }, 00:16:07.698 { 00:16:07.698 "name": "BaseBdev2", 00:16:07.698 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:07.698 "is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 "data_size": 63488 00:16:07.698 }, 00:16:07.698 { 00:16:07.698 "name": "BaseBdev3", 00:16:07.698 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:07.698 "is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 "data_size": 63488 00:16:07.698 }, 00:16:07.698 { 00:16:07.698 "name": "BaseBdev4", 00:16:07.698 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:07.698 "is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 "data_size": 63488 00:16:07.698 } 00:16:07.698 ] 00:16:07.698 }' 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.698 [2024-10-13 02:30:26.169279] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.698 [2024-10-13 02:30:26.206879] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.698 [2024-10-13 02:30:26.206976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:07.698 [2024-10-13 02:30:26.206997] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.698 [2024-10-13 02:30:26.207019] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.698 "name": "raid_bdev1", 00:16:07.698 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:07.698 "strip_size_kb": 64, 00:16:07.698 "state": "online", 00:16:07.698 "raid_level": "raid5f", 00:16:07.698 "superblock": true, 00:16:07.698 "num_base_bdevs": 4, 00:16:07.698 "num_base_bdevs_discovered": 3, 00:16:07.698 "num_base_bdevs_operational": 3, 00:16:07.698 "base_bdevs_list": [ 00:16:07.698 { 00:16:07.698 "name": null, 00:16:07.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.698 "is_configured": false, 00:16:07.698 "data_offset": 0, 00:16:07.698 "data_size": 63488 00:16:07.698 }, 00:16:07.698 { 00:16:07.698 "name": "BaseBdev2", 00:16:07.698 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:07.698 "is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 "data_size": 63488 00:16:07.698 }, 00:16:07.698 { 00:16:07.698 "name": "BaseBdev3", 00:16:07.698 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:07.698 "is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 "data_size": 63488 00:16:07.698 }, 00:16:07.698 { 00:16:07.698 "name": "BaseBdev4", 00:16:07.698 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:07.698 "is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 "data_size": 63488 00:16:07.698 } 00:16:07.698 ] 00:16:07.698 }' 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.698 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.267 "name": "raid_bdev1", 00:16:08.267 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:08.267 "strip_size_kb": 64, 00:16:08.267 "state": "online", 00:16:08.267 "raid_level": "raid5f", 00:16:08.267 "superblock": true, 00:16:08.267 "num_base_bdevs": 4, 00:16:08.267 "num_base_bdevs_discovered": 3, 00:16:08.267 "num_base_bdevs_operational": 3, 00:16:08.267 "base_bdevs_list": [ 00:16:08.267 { 00:16:08.267 "name": null, 00:16:08.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.267 "is_configured": false, 00:16:08.267 "data_offset": 0, 00:16:08.267 "data_size": 63488 00:16:08.267 }, 00:16:08.267 { 00:16:08.267 "name": "BaseBdev2", 00:16:08.267 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:08.267 "is_configured": true, 00:16:08.267 "data_offset": 2048, 00:16:08.267 "data_size": 63488 00:16:08.267 }, 00:16:08.267 { 00:16:08.267 "name": "BaseBdev3", 00:16:08.267 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:08.267 "is_configured": true, 00:16:08.267 "data_offset": 2048, 00:16:08.267 "data_size": 63488 00:16:08.267 }, 
00:16:08.267 { 00:16:08.267 "name": "BaseBdev4", 00:16:08.267 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:08.267 "is_configured": true, 00:16:08.267 "data_offset": 2048, 00:16:08.267 "data_size": 63488 00:16:08.267 } 00:16:08.267 ] 00:16:08.267 }' 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.267 [2024-10-13 02:30:26.835789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.267 [2024-10-13 02:30:26.839209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:16:08.267 [2024-10-13 02:30:26.841548] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.267 02:30:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.203 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.463 "name": "raid_bdev1", 00:16:09.463 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:09.463 "strip_size_kb": 64, 00:16:09.463 "state": "online", 00:16:09.463 "raid_level": "raid5f", 00:16:09.463 "superblock": true, 00:16:09.463 "num_base_bdevs": 4, 00:16:09.463 "num_base_bdevs_discovered": 4, 00:16:09.463 "num_base_bdevs_operational": 4, 00:16:09.463 "process": { 00:16:09.463 "type": "rebuild", 00:16:09.463 "target": "spare", 00:16:09.463 "progress": { 00:16:09.463 "blocks": 19200, 00:16:09.463 "percent": 10 00:16:09.463 } 00:16:09.463 }, 00:16:09.463 "base_bdevs_list": [ 00:16:09.463 { 00:16:09.463 "name": "spare", 00:16:09.463 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:09.463 "is_configured": true, 00:16:09.463 "data_offset": 2048, 00:16:09.463 "data_size": 63488 00:16:09.463 }, 00:16:09.463 { 00:16:09.463 "name": "BaseBdev2", 00:16:09.463 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:09.463 "is_configured": true, 00:16:09.463 "data_offset": 2048, 00:16:09.463 "data_size": 63488 00:16:09.463 }, 00:16:09.463 { 00:16:09.463 "name": "BaseBdev3", 00:16:09.463 "uuid": 
"a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:09.463 "is_configured": true, 00:16:09.463 "data_offset": 2048, 00:16:09.463 "data_size": 63488 00:16:09.463 }, 00:16:09.463 { 00:16:09.463 "name": "BaseBdev4", 00:16:09.463 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:09.463 "is_configured": true, 00:16:09.463 "data_offset": 2048, 00:16:09.463 "data_size": 63488 00:16:09.463 } 00:16:09.463 ] 00:16:09.463 }' 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:09.463 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=535 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.463 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.464 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.464 02:30:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.464 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.464 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.464 "name": "raid_bdev1", 00:16:09.464 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:09.464 "strip_size_kb": 64, 00:16:09.464 "state": "online", 00:16:09.464 "raid_level": "raid5f", 00:16:09.464 "superblock": true, 00:16:09.464 "num_base_bdevs": 4, 00:16:09.464 "num_base_bdevs_discovered": 4, 00:16:09.464 "num_base_bdevs_operational": 4, 00:16:09.464 "process": { 00:16:09.464 "type": "rebuild", 00:16:09.464 "target": "spare", 00:16:09.464 "progress": { 00:16:09.464 "blocks": 21120, 00:16:09.464 "percent": 11 00:16:09.464 } 00:16:09.464 }, 00:16:09.464 "base_bdevs_list": [ 00:16:09.464 { 00:16:09.464 "name": "spare", 00:16:09.464 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:09.464 "is_configured": true, 00:16:09.464 "data_offset": 2048, 00:16:09.464 "data_size": 63488 00:16:09.464 }, 00:16:09.464 { 00:16:09.464 "name": "BaseBdev2", 00:16:09.464 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:09.464 "is_configured": true, 00:16:09.464 "data_offset": 2048, 00:16:09.464 "data_size": 63488 00:16:09.464 }, 00:16:09.464 { 00:16:09.464 "name": "BaseBdev3", 00:16:09.464 "uuid": 
"a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:09.464 "is_configured": true, 00:16:09.464 "data_offset": 2048, 00:16:09.464 "data_size": 63488 00:16:09.464 }, 00:16:09.464 { 00:16:09.464 "name": "BaseBdev4", 00:16:09.464 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:09.464 "is_configured": true, 00:16:09.464 "data_offset": 2048, 00:16:09.464 "data_size": 63488 00:16:09.464 } 00:16:09.464 ] 00:16:09.464 }' 00:16:09.464 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.464 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.464 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.464 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.464 02:30:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.841 "name": "raid_bdev1", 00:16:10.841 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:10.841 "strip_size_kb": 64, 00:16:10.841 "state": "online", 00:16:10.841 "raid_level": "raid5f", 00:16:10.841 "superblock": true, 00:16:10.841 "num_base_bdevs": 4, 00:16:10.841 "num_base_bdevs_discovered": 4, 00:16:10.841 "num_base_bdevs_operational": 4, 00:16:10.841 "process": { 00:16:10.841 "type": "rebuild", 00:16:10.841 "target": "spare", 00:16:10.841 "progress": { 00:16:10.841 "blocks": 42240, 00:16:10.841 "percent": 22 00:16:10.841 } 00:16:10.841 }, 00:16:10.841 "base_bdevs_list": [ 00:16:10.841 { 00:16:10.841 "name": "spare", 00:16:10.841 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:10.841 "is_configured": true, 00:16:10.841 "data_offset": 2048, 00:16:10.841 "data_size": 63488 00:16:10.841 }, 00:16:10.841 { 00:16:10.841 "name": "BaseBdev2", 00:16:10.841 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:10.841 "is_configured": true, 00:16:10.841 "data_offset": 2048, 00:16:10.841 "data_size": 63488 00:16:10.841 }, 00:16:10.841 { 00:16:10.841 "name": "BaseBdev3", 00:16:10.841 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:10.841 "is_configured": true, 00:16:10.841 "data_offset": 2048, 00:16:10.841 "data_size": 63488 00:16:10.841 }, 00:16:10.841 { 00:16:10.841 "name": "BaseBdev4", 00:16:10.841 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:10.841 "is_configured": true, 00:16:10.841 "data_offset": 2048, 00:16:10.841 "data_size": 63488 00:16:10.841 } 00:16:10.841 ] 00:16:10.841 }' 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.841 02:30:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.841 02:30:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.779 "name": "raid_bdev1", 00:16:11.779 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:11.779 "strip_size_kb": 64, 00:16:11.779 "state": "online", 00:16:11.779 "raid_level": "raid5f", 00:16:11.779 "superblock": true, 
00:16:11.779 "num_base_bdevs": 4, 00:16:11.779 "num_base_bdevs_discovered": 4, 00:16:11.779 "num_base_bdevs_operational": 4, 00:16:11.779 "process": { 00:16:11.779 "type": "rebuild", 00:16:11.779 "target": "spare", 00:16:11.779 "progress": { 00:16:11.779 "blocks": 65280, 00:16:11.779 "percent": 34 00:16:11.779 } 00:16:11.779 }, 00:16:11.779 "base_bdevs_list": [ 00:16:11.779 { 00:16:11.779 "name": "spare", 00:16:11.779 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:11.779 "is_configured": true, 00:16:11.779 "data_offset": 2048, 00:16:11.779 "data_size": 63488 00:16:11.779 }, 00:16:11.779 { 00:16:11.779 "name": "BaseBdev2", 00:16:11.779 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:11.779 "is_configured": true, 00:16:11.779 "data_offset": 2048, 00:16:11.779 "data_size": 63488 00:16:11.779 }, 00:16:11.779 { 00:16:11.779 "name": "BaseBdev3", 00:16:11.779 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:11.779 "is_configured": true, 00:16:11.779 "data_offset": 2048, 00:16:11.779 "data_size": 63488 00:16:11.779 }, 00:16:11.779 { 00:16:11.779 "name": "BaseBdev4", 00:16:11.779 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:11.779 "is_configured": true, 00:16:11.779 "data_offset": 2048, 00:16:11.779 "data_size": 63488 00:16:11.779 } 00:16:11.779 ] 00:16:11.779 }' 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.779 02:30:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.789 02:30:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.789 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.048 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.048 "name": "raid_bdev1", 00:16:13.048 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:13.048 "strip_size_kb": 64, 00:16:13.048 "state": "online", 00:16:13.048 "raid_level": "raid5f", 00:16:13.048 "superblock": true, 00:16:13.048 "num_base_bdevs": 4, 00:16:13.048 "num_base_bdevs_discovered": 4, 00:16:13.048 "num_base_bdevs_operational": 4, 00:16:13.048 "process": { 00:16:13.048 "type": "rebuild", 00:16:13.048 "target": "spare", 00:16:13.048 "progress": { 00:16:13.048 "blocks": 86400, 00:16:13.048 "percent": 45 00:16:13.048 } 00:16:13.048 }, 00:16:13.048 "base_bdevs_list": [ 00:16:13.048 { 00:16:13.048 "name": "spare", 00:16:13.048 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:13.048 "is_configured": true, 00:16:13.048 "data_offset": 2048, 00:16:13.048 
"data_size": 63488 00:16:13.048 }, 00:16:13.048 { 00:16:13.048 "name": "BaseBdev2", 00:16:13.048 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:13.048 "is_configured": true, 00:16:13.048 "data_offset": 2048, 00:16:13.048 "data_size": 63488 00:16:13.048 }, 00:16:13.048 { 00:16:13.048 "name": "BaseBdev3", 00:16:13.048 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:13.048 "is_configured": true, 00:16:13.048 "data_offset": 2048, 00:16:13.048 "data_size": 63488 00:16:13.048 }, 00:16:13.048 { 00:16:13.048 "name": "BaseBdev4", 00:16:13.048 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:13.048 "is_configured": true, 00:16:13.048 "data_offset": 2048, 00:16:13.048 "data_size": 63488 00:16:13.048 } 00:16:13.048 ] 00:16:13.048 }' 00:16:13.048 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.048 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.048 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.048 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.048 02:30:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.986 "name": "raid_bdev1", 00:16:13.986 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:13.986 "strip_size_kb": 64, 00:16:13.986 "state": "online", 00:16:13.986 "raid_level": "raid5f", 00:16:13.986 "superblock": true, 00:16:13.986 "num_base_bdevs": 4, 00:16:13.986 "num_base_bdevs_discovered": 4, 00:16:13.986 "num_base_bdevs_operational": 4, 00:16:13.986 "process": { 00:16:13.986 "type": "rebuild", 00:16:13.986 "target": "spare", 00:16:13.986 "progress": { 00:16:13.986 "blocks": 109440, 00:16:13.986 "percent": 57 00:16:13.986 } 00:16:13.986 }, 00:16:13.986 "base_bdevs_list": [ 00:16:13.986 { 00:16:13.986 "name": "spare", 00:16:13.986 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:13.986 "is_configured": true, 00:16:13.986 "data_offset": 2048, 00:16:13.986 "data_size": 63488 00:16:13.986 }, 00:16:13.986 { 00:16:13.986 "name": "BaseBdev2", 00:16:13.986 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:13.986 "is_configured": true, 00:16:13.986 "data_offset": 2048, 00:16:13.986 "data_size": 63488 00:16:13.986 }, 00:16:13.986 { 00:16:13.986 "name": "BaseBdev3", 00:16:13.986 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:13.986 "is_configured": true, 00:16:13.986 "data_offset": 2048, 00:16:13.986 "data_size": 63488 00:16:13.986 }, 00:16:13.986 { 00:16:13.986 "name": "BaseBdev4", 
00:16:13.986 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:13.986 "is_configured": true, 00:16:13.986 "data_offset": 2048, 00:16:13.986 "data_size": 63488 00:16:13.986 } 00:16:13.986 ] 00:16:13.986 }' 00:16:13.986 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.245 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.245 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.245 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.245 02:30:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.182 "name": "raid_bdev1", 00:16:15.182 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:15.182 "strip_size_kb": 64, 00:16:15.182 "state": "online", 00:16:15.182 "raid_level": "raid5f", 00:16:15.182 "superblock": true, 00:16:15.182 "num_base_bdevs": 4, 00:16:15.182 "num_base_bdevs_discovered": 4, 00:16:15.182 "num_base_bdevs_operational": 4, 00:16:15.182 "process": { 00:16:15.182 "type": "rebuild", 00:16:15.182 "target": "spare", 00:16:15.182 "progress": { 00:16:15.182 "blocks": 130560, 00:16:15.182 "percent": 68 00:16:15.182 } 00:16:15.182 }, 00:16:15.182 "base_bdevs_list": [ 00:16:15.182 { 00:16:15.182 "name": "spare", 00:16:15.182 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 2048, 00:16:15.182 "data_size": 63488 00:16:15.182 }, 00:16:15.182 { 00:16:15.182 "name": "BaseBdev2", 00:16:15.182 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 2048, 00:16:15.182 "data_size": 63488 00:16:15.182 }, 00:16:15.182 { 00:16:15.182 "name": "BaseBdev3", 00:16:15.182 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 2048, 00:16:15.182 "data_size": 63488 00:16:15.182 }, 00:16:15.182 { 00:16:15.182 "name": "BaseBdev4", 00:16:15.182 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:15.182 "is_configured": true, 00:16:15.182 "data_offset": 2048, 00:16:15.182 "data_size": 63488 00:16:15.182 } 00:16:15.182 ] 00:16:15.182 }' 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.182 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:15.442 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.442 02:30:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.379 "name": "raid_bdev1", 00:16:16.379 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:16.379 "strip_size_kb": 64, 00:16:16.379 "state": "online", 00:16:16.379 "raid_level": "raid5f", 00:16:16.379 "superblock": true, 00:16:16.379 "num_base_bdevs": 4, 00:16:16.379 "num_base_bdevs_discovered": 4, 00:16:16.379 "num_base_bdevs_operational": 4, 00:16:16.379 "process": { 00:16:16.379 "type": "rebuild", 00:16:16.379 "target": "spare", 
00:16:16.379 "progress": { 00:16:16.379 "blocks": 153600, 00:16:16.379 "percent": 80 00:16:16.379 } 00:16:16.379 }, 00:16:16.379 "base_bdevs_list": [ 00:16:16.379 { 00:16:16.379 "name": "spare", 00:16:16.379 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:16.379 "is_configured": true, 00:16:16.379 "data_offset": 2048, 00:16:16.379 "data_size": 63488 00:16:16.379 }, 00:16:16.379 { 00:16:16.379 "name": "BaseBdev2", 00:16:16.379 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:16.379 "is_configured": true, 00:16:16.379 "data_offset": 2048, 00:16:16.379 "data_size": 63488 00:16:16.379 }, 00:16:16.379 { 00:16:16.379 "name": "BaseBdev3", 00:16:16.379 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:16.379 "is_configured": true, 00:16:16.379 "data_offset": 2048, 00:16:16.379 "data_size": 63488 00:16:16.379 }, 00:16:16.379 { 00:16:16.379 "name": "BaseBdev4", 00:16:16.379 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:16.379 "is_configured": true, 00:16:16.379 "data_offset": 2048, 00:16:16.379 "data_size": 63488 00:16:16.379 } 00:16:16.379 ] 00:16:16.379 }' 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.379 02:30:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.379 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.379 02:30:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.757 "name": "raid_bdev1", 00:16:17.757 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:17.757 "strip_size_kb": 64, 00:16:17.757 "state": "online", 00:16:17.757 "raid_level": "raid5f", 00:16:17.757 "superblock": true, 00:16:17.757 "num_base_bdevs": 4, 00:16:17.757 "num_base_bdevs_discovered": 4, 00:16:17.757 "num_base_bdevs_operational": 4, 00:16:17.757 "process": { 00:16:17.757 "type": "rebuild", 00:16:17.757 "target": "spare", 00:16:17.757 "progress": { 00:16:17.757 "blocks": 174720, 00:16:17.757 "percent": 91 00:16:17.757 } 00:16:17.757 }, 00:16:17.757 "base_bdevs_list": [ 00:16:17.757 { 00:16:17.757 "name": "spare", 00:16:17.757 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:17.757 "is_configured": true, 00:16:17.757 "data_offset": 2048, 00:16:17.757 "data_size": 63488 00:16:17.757 }, 00:16:17.757 { 00:16:17.757 "name": "BaseBdev2", 00:16:17.757 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:17.757 "is_configured": true, 00:16:17.757 
"data_offset": 2048, 00:16:17.757 "data_size": 63488 00:16:17.757 }, 00:16:17.757 { 00:16:17.757 "name": "BaseBdev3", 00:16:17.757 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:17.757 "is_configured": true, 00:16:17.757 "data_offset": 2048, 00:16:17.757 "data_size": 63488 00:16:17.757 }, 00:16:17.757 { 00:16:17.757 "name": "BaseBdev4", 00:16:17.757 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:17.757 "is_configured": true, 00:16:17.757 "data_offset": 2048, 00:16:17.757 "data_size": 63488 00:16:17.757 } 00:16:17.757 ] 00:16:17.757 }' 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.757 02:30:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.325 [2024-10-13 02:30:36.908706] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:18.325 [2024-10-13 02:30:36.908825] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:18.325 [2024-10-13 02:30:36.908992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.585 02:30:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.585 "name": "raid_bdev1", 00:16:18.585 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:18.585 "strip_size_kb": 64, 00:16:18.585 "state": "online", 00:16:18.585 "raid_level": "raid5f", 00:16:18.585 "superblock": true, 00:16:18.585 "num_base_bdevs": 4, 00:16:18.585 "num_base_bdevs_discovered": 4, 00:16:18.585 "num_base_bdevs_operational": 4, 00:16:18.585 "base_bdevs_list": [ 00:16:18.585 { 00:16:18.585 "name": "spare", 00:16:18.585 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:18.585 "is_configured": true, 00:16:18.585 "data_offset": 2048, 00:16:18.585 "data_size": 63488 00:16:18.585 }, 00:16:18.585 { 00:16:18.585 "name": "BaseBdev2", 00:16:18.585 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:18.585 "is_configured": true, 00:16:18.585 "data_offset": 2048, 00:16:18.585 "data_size": 63488 00:16:18.585 }, 00:16:18.585 { 00:16:18.585 "name": "BaseBdev3", 00:16:18.585 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:18.585 "is_configured": true, 00:16:18.585 "data_offset": 2048, 00:16:18.585 "data_size": 63488 00:16:18.585 }, 00:16:18.585 { 00:16:18.585 "name": "BaseBdev4", 00:16:18.585 "uuid": 
"8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:18.585 "is_configured": true, 00:16:18.585 "data_offset": 2048, 00:16:18.585 "data_size": 63488 00:16:18.585 } 00:16:18.585 ] 00:16:18.585 }' 00:16:18.585 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.844 "name": 
"raid_bdev1", 00:16:18.844 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:18.844 "strip_size_kb": 64, 00:16:18.844 "state": "online", 00:16:18.844 "raid_level": "raid5f", 00:16:18.844 "superblock": true, 00:16:18.844 "num_base_bdevs": 4, 00:16:18.844 "num_base_bdevs_discovered": 4, 00:16:18.844 "num_base_bdevs_operational": 4, 00:16:18.844 "base_bdevs_list": [ 00:16:18.844 { 00:16:18.844 "name": "spare", 00:16:18.844 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:18.844 "is_configured": true, 00:16:18.844 "data_offset": 2048, 00:16:18.844 "data_size": 63488 00:16:18.844 }, 00:16:18.844 { 00:16:18.844 "name": "BaseBdev2", 00:16:18.844 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:18.844 "is_configured": true, 00:16:18.844 "data_offset": 2048, 00:16:18.844 "data_size": 63488 00:16:18.844 }, 00:16:18.844 { 00:16:18.844 "name": "BaseBdev3", 00:16:18.844 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:18.844 "is_configured": true, 00:16:18.844 "data_offset": 2048, 00:16:18.844 "data_size": 63488 00:16:18.844 }, 00:16:18.844 { 00:16:18.844 "name": "BaseBdev4", 00:16:18.844 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:18.844 "is_configured": true, 00:16:18.844 "data_offset": 2048, 00:16:18.844 "data_size": 63488 00:16:18.844 } 00:16:18.844 ] 00:16:18.844 }' 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.844 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.104 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.104 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.104 "name": "raid_bdev1", 00:16:19.104 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:19.104 "strip_size_kb": 64, 00:16:19.104 "state": "online", 00:16:19.104 "raid_level": "raid5f", 00:16:19.104 "superblock": true, 00:16:19.104 "num_base_bdevs": 4, 00:16:19.104 "num_base_bdevs_discovered": 4, 00:16:19.104 "num_base_bdevs_operational": 4, 00:16:19.104 "base_bdevs_list": [ 00:16:19.104 { 00:16:19.104 "name": "spare", 
00:16:19.104 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:19.104 "is_configured": true, 00:16:19.104 "data_offset": 2048, 00:16:19.104 "data_size": 63488 00:16:19.104 }, 00:16:19.104 { 00:16:19.104 "name": "BaseBdev2", 00:16:19.104 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:19.104 "is_configured": true, 00:16:19.104 "data_offset": 2048, 00:16:19.104 "data_size": 63488 00:16:19.104 }, 00:16:19.104 { 00:16:19.104 "name": "BaseBdev3", 00:16:19.104 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:19.104 "is_configured": true, 00:16:19.104 "data_offset": 2048, 00:16:19.104 "data_size": 63488 00:16:19.104 }, 00:16:19.104 { 00:16:19.104 "name": "BaseBdev4", 00:16:19.104 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:19.104 "is_configured": true, 00:16:19.104 "data_offset": 2048, 00:16:19.104 "data_size": 63488 00:16:19.104 } 00:16:19.104 ] 00:16:19.104 }' 00:16:19.104 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.104 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.363 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.363 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.363 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.363 [2024-10-13 02:30:37.956735] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.363 [2024-10-13 02:30:37.956784] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.363 [2024-10-13 02:30:37.956915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.363 [2024-10-13 02:30:37.957021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.363 [2024-10-13 02:30:37.957054] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:19.363 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.363 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.363 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.363 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:19.363 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.363 02:30:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.363 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:19.622 /dev/nbd0 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.622 1+0 records in 00:16:19.622 1+0 records out 00:16:19.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407313 s, 10.1 MB/s 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.622 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:19.880 /dev/nbd1 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.880 1+0 records in 00:16:19.880 1+0 records out 00:16:19.880 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000480733 s, 8.5 MB/s 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.880 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:20.140 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:20.140 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.140 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:20.140 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.140 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:20.140 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.140 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:20.398 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:20.398 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:20.398 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:20.398 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.398 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.398 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:20.398 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:20.398 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.398 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.399 02:30:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:20.658 02:30:39 
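The `waitfornbd` / `waitfornbd_exit` calls traced above both reduce to the same pattern: poll `/proc/partitions` until the nbd device name appears (or disappears), retrying up to 20 times before giving up. A minimal sketch of that polling loop is below; the `partitions_file` parameter is an illustrative addition for testability (the real helper in `autotest_common.sh` reads `/proc/partitions` directly), and the retry delay is an assumption.

```shell
# Sketch (assumed, simplified) of the waitfornbd polling pattern from the
# trace: retry up to 20 times until the device name appears as a whole
# word in the partitions listing. Returns 0 once found, 1 on timeout.
waitfornbd_sketch() {
    local nbd_name=$1 partitions_file=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the bare device name only, so nbd0 does not match nbd01
        grep -q -w "$nbd_name" "$partitions_file" && return 0
        sleep 0.1
    done
    return 1
}
```

In the trace the loop exits on the first iteration (the `break` right after the `grep`), after which the helper confirms the device is readable with a single direct 4 KiB `dd` read.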
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.658 [2024-10-13 02:30:39.116662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:20.658 [2024-10-13 02:30:39.117167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.658 [2024-10-13 02:30:39.117270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:20.658 [2024-10-13 02:30:39.117325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.658 [2024-10-13 02:30:39.119622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.658 [2024-10-13 02:30:39.119768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:20.658 [2024-10-13 02:30:39.119933] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:20.658 [2024-10-13 02:30:39.119979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.658 [2024-10-13 02:30:39.120112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.658 [2024-10-13 02:30:39.120223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:20.658 [2024-10-13 02:30:39.120296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:16:20.658 spare 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.658 [2024-10-13 02:30:39.220222] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:20.658 [2024-10-13 02:30:39.220276] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:20.658 [2024-10-13 02:30:39.220621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:16:20.658 [2024-10-13 02:30:39.221186] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:20.658 [2024-10-13 02:30:39.221212] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:20.658 [2024-10-13 02:30:39.221405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.658 "name": "raid_bdev1", 00:16:20.658 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:20.658 "strip_size_kb": 64, 00:16:20.658 "state": "online", 00:16:20.658 "raid_level": "raid5f", 00:16:20.658 "superblock": true, 00:16:20.658 "num_base_bdevs": 4, 00:16:20.658 "num_base_bdevs_discovered": 4, 00:16:20.658 "num_base_bdevs_operational": 4, 00:16:20.658 "base_bdevs_list": [ 00:16:20.658 { 00:16:20.658 "name": "spare", 00:16:20.658 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:20.658 "is_configured": true, 00:16:20.658 "data_offset": 2048, 00:16:20.658 "data_size": 63488 00:16:20.658 }, 00:16:20.658 { 00:16:20.658 "name": "BaseBdev2", 00:16:20.658 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:20.658 "is_configured": true, 00:16:20.658 "data_offset": 2048, 00:16:20.658 "data_size": 63488 00:16:20.658 }, 00:16:20.658 { 00:16:20.658 "name": 
"BaseBdev3", 00:16:20.658 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:20.658 "is_configured": true, 00:16:20.658 "data_offset": 2048, 00:16:20.658 "data_size": 63488 00:16:20.658 }, 00:16:20.658 { 00:16:20.658 "name": "BaseBdev4", 00:16:20.658 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:20.658 "is_configured": true, 00:16:20.658 "data_offset": 2048, 00:16:20.658 "data_size": 63488 00:16:20.658 } 00:16:20.658 ] 00:16:20.658 }' 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.658 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.227 "name": "raid_bdev1", 00:16:21.227 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:21.227 
"strip_size_kb": 64, 00:16:21.227 "state": "online", 00:16:21.227 "raid_level": "raid5f", 00:16:21.227 "superblock": true, 00:16:21.227 "num_base_bdevs": 4, 00:16:21.227 "num_base_bdevs_discovered": 4, 00:16:21.227 "num_base_bdevs_operational": 4, 00:16:21.227 "base_bdevs_list": [ 00:16:21.227 { 00:16:21.227 "name": "spare", 00:16:21.227 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:21.227 "is_configured": true, 00:16:21.227 "data_offset": 2048, 00:16:21.227 "data_size": 63488 00:16:21.227 }, 00:16:21.227 { 00:16:21.227 "name": "BaseBdev2", 00:16:21.227 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:21.227 "is_configured": true, 00:16:21.227 "data_offset": 2048, 00:16:21.227 "data_size": 63488 00:16:21.227 }, 00:16:21.227 { 00:16:21.227 "name": "BaseBdev3", 00:16:21.227 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:21.227 "is_configured": true, 00:16:21.227 "data_offset": 2048, 00:16:21.227 "data_size": 63488 00:16:21.227 }, 00:16:21.227 { 00:16:21.227 "name": "BaseBdev4", 00:16:21.227 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:21.227 "is_configured": true, 00:16:21.227 "data_offset": 2048, 00:16:21.227 "data_size": 63488 00:16:21.227 } 00:16:21.227 ] 00:16:21.227 }' 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.227 [2024-10-13 02:30:39.852385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.227 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.227 "name": "raid_bdev1", 00:16:21.227 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:21.227 "strip_size_kb": 64, 00:16:21.227 "state": "online", 00:16:21.227 "raid_level": "raid5f", 00:16:21.227 "superblock": true, 00:16:21.227 "num_base_bdevs": 4, 00:16:21.227 "num_base_bdevs_discovered": 3, 00:16:21.227 "num_base_bdevs_operational": 3, 00:16:21.227 "base_bdevs_list": [ 00:16:21.227 { 00:16:21.227 "name": null, 00:16:21.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.227 "is_configured": false, 00:16:21.227 "data_offset": 0, 00:16:21.227 "data_size": 63488 00:16:21.227 }, 00:16:21.228 { 00:16:21.228 "name": "BaseBdev2", 00:16:21.228 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:21.228 "is_configured": true, 00:16:21.228 "data_offset": 2048, 00:16:21.228 "data_size": 63488 00:16:21.228 }, 00:16:21.228 { 00:16:21.228 "name": "BaseBdev3", 00:16:21.228 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:21.228 "is_configured": true, 00:16:21.228 "data_offset": 2048, 00:16:21.228 "data_size": 63488 00:16:21.228 }, 00:16:21.228 { 00:16:21.228 "name": "BaseBdev4", 00:16:21.228 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:21.228 "is_configured": true, 00:16:21.228 "data_offset": 2048, 00:16:21.228 "data_size": 63488 00:16:21.228 } 00:16:21.228 ] 00:16:21.228 }' 
00:16:21.228 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.228 02:30:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.796 02:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:21.796 02:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.796 02:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.796 [2024-10-13 02:30:40.303653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.796 [2024-10-13 02:30:40.303890] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.796 [2024-10-13 02:30:40.303912] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:21.796 [2024-10-13 02:30:40.304309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.796 [2024-10-13 02:30:40.307533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:16:21.796 [2024-10-13 02:30:40.309769] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.796 02:30:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.796 02:30:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.733 "name": "raid_bdev1", 00:16:22.733 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:22.733 "strip_size_kb": 64, 00:16:22.733 "state": "online", 00:16:22.733 "raid_level": "raid5f", 00:16:22.733 "superblock": true, 00:16:22.733 "num_base_bdevs": 4, 00:16:22.733 "num_base_bdevs_discovered": 4, 00:16:22.733 "num_base_bdevs_operational": 4, 00:16:22.733 "process": { 00:16:22.733 "type": "rebuild", 00:16:22.733 "target": "spare", 00:16:22.733 "progress": { 00:16:22.733 "blocks": 19200, 00:16:22.733 "percent": 10 00:16:22.733 } 00:16:22.733 }, 00:16:22.733 "base_bdevs_list": [ 00:16:22.733 { 00:16:22.733 "name": "spare", 00:16:22.733 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:22.733 "is_configured": true, 00:16:22.733 "data_offset": 2048, 00:16:22.733 "data_size": 63488 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "name": "BaseBdev2", 00:16:22.733 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:22.733 "is_configured": true, 00:16:22.733 "data_offset": 2048, 00:16:22.733 "data_size": 63488 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "name": "BaseBdev3", 00:16:22.733 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:22.733 
"is_configured": true, 00:16:22.733 "data_offset": 2048, 00:16:22.733 "data_size": 63488 00:16:22.733 }, 00:16:22.733 { 00:16:22.733 "name": "BaseBdev4", 00:16:22.733 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:22.733 "is_configured": true, 00:16:22.733 "data_offset": 2048, 00:16:22.733 "data_size": 63488 00:16:22.733 } 00:16:22.733 ] 00:16:22.733 }' 00:16:22.733 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.993 [2024-10-13 02:30:41.482163] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.993 [2024-10-13 02:30:41.517980] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:22.993 [2024-10-13 02:30:41.518060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.993 [2024-10-13 02:30:41.518081] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.993 [2024-10-13 02:30:41.518088] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.993 "name": "raid_bdev1", 00:16:22.993 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:22.993 "strip_size_kb": 64, 00:16:22.993 "state": "online", 00:16:22.993 "raid_level": "raid5f", 00:16:22.993 "superblock": true, 00:16:22.993 "num_base_bdevs": 4, 00:16:22.993 "num_base_bdevs_discovered": 3, 
00:16:22.993 "num_base_bdevs_operational": 3, 00:16:22.993 "base_bdevs_list": [ 00:16:22.993 { 00:16:22.993 "name": null, 00:16:22.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.993 "is_configured": false, 00:16:22.993 "data_offset": 0, 00:16:22.993 "data_size": 63488 00:16:22.993 }, 00:16:22.993 { 00:16:22.993 "name": "BaseBdev2", 00:16:22.993 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:22.993 "is_configured": true, 00:16:22.993 "data_offset": 2048, 00:16:22.993 "data_size": 63488 00:16:22.993 }, 00:16:22.993 { 00:16:22.993 "name": "BaseBdev3", 00:16:22.993 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:22.993 "is_configured": true, 00:16:22.993 "data_offset": 2048, 00:16:22.993 "data_size": 63488 00:16:22.993 }, 00:16:22.993 { 00:16:22.993 "name": "BaseBdev4", 00:16:22.993 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:22.993 "is_configured": true, 00:16:22.993 "data_offset": 2048, 00:16:22.993 "data_size": 63488 00:16:22.993 } 00:16:22.993 ] 00:16:22.993 }' 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.993 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.561 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:23.561 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.561 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.561 [2024-10-13 02:30:41.962498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:23.561 [2024-10-13 02:30:41.962582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.561 [2024-10-13 02:30:41.962611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:23.561 [2024-10-13 02:30:41.962620] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.561 [2024-10-13 02:30:41.963088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.561 [2024-10-13 02:30:41.963108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:23.561 [2024-10-13 02:30:41.963204] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:23.561 [2024-10-13 02:30:41.963218] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:23.561 [2024-10-13 02:30:41.963233] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:23.561 [2024-10-13 02:30:41.963256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.561 [2024-10-13 02:30:41.966599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:16:23.561 spare 00:16:23.561 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.561 02:30:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:23.561 [2024-10-13 02:30:41.968931] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:24.498 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.498 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.498 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.498 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.498 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.498 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.498 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.498 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.498 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.499 02:30:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.499 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.499 "name": "raid_bdev1", 00:16:24.499 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:24.499 "strip_size_kb": 64, 00:16:24.499 "state": "online", 00:16:24.499 "raid_level": "raid5f", 00:16:24.499 "superblock": true, 00:16:24.499 "num_base_bdevs": 4, 00:16:24.499 "num_base_bdevs_discovered": 4, 00:16:24.499 "num_base_bdevs_operational": 4, 00:16:24.499 "process": { 00:16:24.499 "type": "rebuild", 00:16:24.499 "target": "spare", 00:16:24.499 "progress": { 00:16:24.499 "blocks": 19200, 00:16:24.499 "percent": 10 00:16:24.499 } 00:16:24.499 }, 00:16:24.499 "base_bdevs_list": [ 00:16:24.499 { 00:16:24.499 "name": "spare", 00:16:24.499 "uuid": "3d195646-b2d2-512b-ab70-b095a4e80a50", 00:16:24.499 "is_configured": true, 00:16:24.499 "data_offset": 2048, 00:16:24.499 "data_size": 63488 00:16:24.499 }, 00:16:24.499 { 00:16:24.499 "name": "BaseBdev2", 00:16:24.499 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:24.499 "is_configured": true, 00:16:24.499 "data_offset": 2048, 00:16:24.499 "data_size": 63488 00:16:24.499 }, 00:16:24.499 { 00:16:24.499 "name": "BaseBdev3", 00:16:24.499 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:24.499 "is_configured": true, 00:16:24.499 "data_offset": 2048, 00:16:24.499 "data_size": 63488 00:16:24.499 }, 00:16:24.499 { 00:16:24.499 "name": "BaseBdev4", 00:16:24.499 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 
00:16:24.499 "is_configured": true, 00:16:24.499 "data_offset": 2048, 00:16:24.499 "data_size": 63488 00:16:24.499 } 00:16:24.499 ] 00:16:24.499 }' 00:16:24.499 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.499 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.499 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.499 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.499 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:24.499 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.499 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.499 [2024-10-13 02:30:43.141198] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:24.499 [2024-10-13 02:30:43.177072] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:24.499 [2024-10-13 02:30:43.177160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.499 [2024-10-13 02:30:43.177178] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:24.499 [2024-10-13 02:30:43.177187] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.758 "name": "raid_bdev1", 00:16:24.758 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:24.758 "strip_size_kb": 64, 00:16:24.758 "state": "online", 00:16:24.758 "raid_level": "raid5f", 00:16:24.758 "superblock": true, 00:16:24.758 "num_base_bdevs": 4, 00:16:24.758 "num_base_bdevs_discovered": 3, 00:16:24.758 "num_base_bdevs_operational": 3, 00:16:24.758 "base_bdevs_list": [ 00:16:24.758 { 00:16:24.758 "name": null, 00:16:24.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.758 "is_configured": 
false, 00:16:24.758 "data_offset": 0, 00:16:24.758 "data_size": 63488 00:16:24.758 }, 00:16:24.758 { 00:16:24.758 "name": "BaseBdev2", 00:16:24.758 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:24.758 "is_configured": true, 00:16:24.758 "data_offset": 2048, 00:16:24.758 "data_size": 63488 00:16:24.758 }, 00:16:24.758 { 00:16:24.758 "name": "BaseBdev3", 00:16:24.758 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:24.758 "is_configured": true, 00:16:24.758 "data_offset": 2048, 00:16:24.758 "data_size": 63488 00:16:24.758 }, 00:16:24.758 { 00:16:24.758 "name": "BaseBdev4", 00:16:24.758 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:24.758 "is_configured": true, 00:16:24.758 "data_offset": 2048, 00:16:24.758 "data_size": 63488 00:16:24.758 } 00:16:24.758 ] 00:16:24.758 }' 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.758 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.018 "name": "raid_bdev1", 00:16:25.018 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:25.018 "strip_size_kb": 64, 00:16:25.018 "state": "online", 00:16:25.018 "raid_level": "raid5f", 00:16:25.018 "superblock": true, 00:16:25.018 "num_base_bdevs": 4, 00:16:25.018 "num_base_bdevs_discovered": 3, 00:16:25.018 "num_base_bdevs_operational": 3, 00:16:25.018 "base_bdevs_list": [ 00:16:25.018 { 00:16:25.018 "name": null, 00:16:25.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.018 "is_configured": false, 00:16:25.018 "data_offset": 0, 00:16:25.018 "data_size": 63488 00:16:25.018 }, 00:16:25.018 { 00:16:25.018 "name": "BaseBdev2", 00:16:25.018 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:25.018 "is_configured": true, 00:16:25.018 "data_offset": 2048, 00:16:25.018 "data_size": 63488 00:16:25.018 }, 00:16:25.018 { 00:16:25.018 "name": "BaseBdev3", 00:16:25.018 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:25.018 "is_configured": true, 00:16:25.018 "data_offset": 2048, 00:16:25.018 "data_size": 63488 00:16:25.018 }, 00:16:25.018 { 00:16:25.018 "name": "BaseBdev4", 00:16:25.018 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:25.018 "is_configured": true, 00:16:25.018 "data_offset": 2048, 00:16:25.018 "data_size": 63488 00:16:25.018 } 00:16:25.018 ] 00:16:25.018 }' 00:16:25.018 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.278 [2024-10-13 02:30:43.797179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:25.278 [2024-10-13 02:30:43.797250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.278 [2024-10-13 02:30:43.797288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:25.278 [2024-10-13 02:30:43.797302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.278 [2024-10-13 02:30:43.797712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.278 [2024-10-13 02:30:43.797733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:25.278 [2024-10-13 02:30:43.797807] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:25.278 [2024-10-13 02:30:43.797826] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:25.278 [2024-10-13 02:30:43.797844] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain 
this bdev's uuid 00:16:25.278 [2024-10-13 02:30:43.797858] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:25.278 BaseBdev1 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.278 02:30:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.250 02:30:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.250 "name": "raid_bdev1", 00:16:26.250 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:26.250 "strip_size_kb": 64, 00:16:26.250 "state": "online", 00:16:26.250 "raid_level": "raid5f", 00:16:26.250 "superblock": true, 00:16:26.250 "num_base_bdevs": 4, 00:16:26.250 "num_base_bdevs_discovered": 3, 00:16:26.250 "num_base_bdevs_operational": 3, 00:16:26.250 "base_bdevs_list": [ 00:16:26.250 { 00:16:26.250 "name": null, 00:16:26.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.250 "is_configured": false, 00:16:26.250 "data_offset": 0, 00:16:26.250 "data_size": 63488 00:16:26.250 }, 00:16:26.250 { 00:16:26.250 "name": "BaseBdev2", 00:16:26.250 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:26.250 "is_configured": true, 00:16:26.250 "data_offset": 2048, 00:16:26.250 "data_size": 63488 00:16:26.250 }, 00:16:26.250 { 00:16:26.250 "name": "BaseBdev3", 00:16:26.250 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:26.250 "is_configured": true, 00:16:26.250 "data_offset": 2048, 00:16:26.250 "data_size": 63488 00:16:26.250 }, 00:16:26.250 { 00:16:26.250 "name": "BaseBdev4", 00:16:26.250 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:26.250 "is_configured": true, 00:16:26.250 "data_offset": 2048, 00:16:26.250 "data_size": 63488 00:16:26.250 } 00:16:26.250 ] 00:16:26.250 }' 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.250 02:30:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.819 02:30:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.819 "name": "raid_bdev1", 00:16:26.819 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:26.819 "strip_size_kb": 64, 00:16:26.819 "state": "online", 00:16:26.819 "raid_level": "raid5f", 00:16:26.819 "superblock": true, 00:16:26.819 "num_base_bdevs": 4, 00:16:26.819 "num_base_bdevs_discovered": 3, 00:16:26.819 "num_base_bdevs_operational": 3, 00:16:26.819 "base_bdevs_list": [ 00:16:26.819 { 00:16:26.819 "name": null, 00:16:26.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.819 "is_configured": false, 00:16:26.819 "data_offset": 0, 00:16:26.819 "data_size": 63488 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "name": "BaseBdev2", 00:16:26.819 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:26.819 "is_configured": true, 00:16:26.819 "data_offset": 2048, 00:16:26.819 "data_size": 63488 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "name": "BaseBdev3", 00:16:26.819 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:26.819 "is_configured": true, 00:16:26.819 "data_offset": 2048, 00:16:26.819 
"data_size": 63488 00:16:26.819 }, 00:16:26.819 { 00:16:26.819 "name": "BaseBdev4", 00:16:26.819 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:26.819 "is_configured": true, 00:16:26.819 "data_offset": 2048, 00:16:26.819 "data_size": 63488 00:16:26.819 } 00:16:26.819 ] 00:16:26.819 }' 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.819 [2024-10-13 
02:30:45.414543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.819 [2024-10-13 02:30:45.414724] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.819 [2024-10-13 02:30:45.414739] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:26.819 request: 00:16:26.819 { 00:16:26.819 "base_bdev": "BaseBdev1", 00:16:26.819 "raid_bdev": "raid_bdev1", 00:16:26.819 "method": "bdev_raid_add_base_bdev", 00:16:26.819 "req_id": 1 00:16:26.819 } 00:16:26.819 Got JSON-RPC error response 00:16:26.819 response: 00:16:26.819 { 00:16:26.819 "code": -22, 00:16:26.819 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:26.819 } 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:26.819 02:30:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.757 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.016 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.016 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.016 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.016 "name": "raid_bdev1", 00:16:28.016 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:28.016 "strip_size_kb": 64, 00:16:28.016 "state": "online", 00:16:28.016 "raid_level": "raid5f", 00:16:28.016 "superblock": true, 00:16:28.016 "num_base_bdevs": 4, 00:16:28.016 "num_base_bdevs_discovered": 3, 00:16:28.016 "num_base_bdevs_operational": 3, 00:16:28.016 "base_bdevs_list": [ 00:16:28.016 { 00:16:28.016 "name": null, 00:16:28.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.016 "is_configured": false, 00:16:28.016 "data_offset": 0, 00:16:28.016 "data_size": 63488 00:16:28.016 }, 00:16:28.016 { 00:16:28.016 "name": "BaseBdev2", 00:16:28.016 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:28.016 
"is_configured": true, 00:16:28.016 "data_offset": 2048, 00:16:28.016 "data_size": 63488 00:16:28.016 }, 00:16:28.016 { 00:16:28.016 "name": "BaseBdev3", 00:16:28.016 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:28.016 "is_configured": true, 00:16:28.016 "data_offset": 2048, 00:16:28.016 "data_size": 63488 00:16:28.016 }, 00:16:28.016 { 00:16:28.016 "name": "BaseBdev4", 00:16:28.016 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:28.016 "is_configured": true, 00:16:28.016 "data_offset": 2048, 00:16:28.016 "data_size": 63488 00:16:28.016 } 00:16:28.016 ] 00:16:28.016 }' 00:16:28.016 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.016 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.275 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.534 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:28.534 "name": "raid_bdev1", 00:16:28.534 "uuid": "29da7ca1-b820-4215-88b0-1d68f0be5a66", 00:16:28.534 "strip_size_kb": 64, 00:16:28.534 "state": "online", 00:16:28.534 "raid_level": "raid5f", 00:16:28.534 "superblock": true, 00:16:28.534 "num_base_bdevs": 4, 00:16:28.534 "num_base_bdevs_discovered": 3, 00:16:28.534 "num_base_bdevs_operational": 3, 00:16:28.534 "base_bdevs_list": [ 00:16:28.534 { 00:16:28.534 "name": null, 00:16:28.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.535 "is_configured": false, 00:16:28.535 "data_offset": 0, 00:16:28.535 "data_size": 63488 00:16:28.535 }, 00:16:28.535 { 00:16:28.535 "name": "BaseBdev2", 00:16:28.535 "uuid": "01548888-a8d1-5a36-89c4-1d2b7a9374e6", 00:16:28.535 "is_configured": true, 00:16:28.535 "data_offset": 2048, 00:16:28.535 "data_size": 63488 00:16:28.535 }, 00:16:28.535 { 00:16:28.535 "name": "BaseBdev3", 00:16:28.535 "uuid": "a1c192ff-edf5-5a69-b42f-b1f0d18d1a09", 00:16:28.535 "is_configured": true, 00:16:28.535 "data_offset": 2048, 00:16:28.535 "data_size": 63488 00:16:28.535 }, 00:16:28.535 { 00:16:28.535 "name": "BaseBdev4", 00:16:28.535 "uuid": "8e2ca2df-1dd2-5431-80ed-e16d48db350b", 00:16:28.535 "is_configured": true, 00:16:28.535 "data_offset": 2048, 00:16:28.535 "data_size": 63488 00:16:28.535 } 00:16:28.535 ] 00:16:28.535 }' 00:16:28.535 02:30:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95458 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 
95458 ']' 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95458 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95458 00:16:28.535 killing process with pid 95458 00:16:28.535 Received shutdown signal, test time was about 60.000000 seconds 00:16:28.535 00:16:28.535 Latency(us) 00:16:28.535 [2024-10-13T02:30:47.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.535 [2024-10-13T02:30:47.219Z] =================================================================================================================== 00:16:28.535 [2024-10-13T02:30:47.219Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95458' 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95458 00:16:28.535 [2024-10-13 02:30:47.082962] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.535 [2024-10-13 02:30:47.083105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.535 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95458 00:16:28.535 [2024-10-13 02:30:47.083183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.535 [2024-10-13 02:30:47.083194] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:28.535 [2024-10-13 02:30:47.135801] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.794 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:28.794 00:16:28.794 real 0m25.506s 00:16:28.794 user 0m32.502s 00:16:28.794 sys 0m3.215s 00:16:28.794 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.794 02:30:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.794 ************************************ 00:16:28.794 END TEST raid5f_rebuild_test_sb 00:16:28.794 ************************************ 00:16:28.794 02:30:47 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:28.794 02:30:47 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:28.794 02:30:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:28.794 02:30:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.794 02:30:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.794 ************************************ 00:16:28.794 START TEST raid_state_function_test_sb_4k 00:16:28.794 ************************************ 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:28.794 02:30:47 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.794 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96252 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96252' 00:16:28.795 Process raid pid: 96252 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96252 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96252 ']' 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.795 02:30:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.054 [2024-10-13 02:30:47.537371] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:29.054 [2024-10-13 02:30:47.537584] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.054 [2024-10-13 02:30:47.685944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.321 [2024-10-13 02:30:47.738470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.321 [2024-10-13 02:30:47.781164] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.321 [2024-10-13 02:30:47.781297] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.890 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.891 [2024-10-13 02:30:48.395148] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.891 [2024-10-13 02:30:48.395310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.891 [2024-10-13 02:30:48.395327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.891 [2024-10-13 02:30:48.395338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.891 "name": "Existed_Raid", 00:16:29.891 "uuid": 
"61ee9ad4-4849-4858-a937-b201afb586af", 00:16:29.891 "strip_size_kb": 0, 00:16:29.891 "state": "configuring", 00:16:29.891 "raid_level": "raid1", 00:16:29.891 "superblock": true, 00:16:29.891 "num_base_bdevs": 2, 00:16:29.891 "num_base_bdevs_discovered": 0, 00:16:29.891 "num_base_bdevs_operational": 2, 00:16:29.891 "base_bdevs_list": [ 00:16:29.891 { 00:16:29.891 "name": "BaseBdev1", 00:16:29.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.891 "is_configured": false, 00:16:29.891 "data_offset": 0, 00:16:29.891 "data_size": 0 00:16:29.891 }, 00:16:29.891 { 00:16:29.891 "name": "BaseBdev2", 00:16:29.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.891 "is_configured": false, 00:16:29.891 "data_offset": 0, 00:16:29.891 "data_size": 0 00:16:29.891 } 00:16:29.891 ] 00:16:29.891 }' 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.891 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.459 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.459 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.459 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.459 [2024-10-13 02:30:48.850223] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.459 [2024-10-13 02:30:48.850379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:16:30.459 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.459 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:30.459 02:30:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.459 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.459 [2024-10-13 02:30:48.862228] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.459 [2024-10-13 02:30:48.862384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.459 [2024-10-13 02:30:48.862428] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.459 [2024-10-13 02:30:48.862453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.459 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.459 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:30.459 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.460 [2024-10-13 02:30:48.883154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.460 BaseBdev1 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.460 [ 00:16:30.460 { 00:16:30.460 "name": "BaseBdev1", 00:16:30.460 "aliases": [ 00:16:30.460 "e5dada7d-25b0-4922-8e8d-fcb6aa9b4d8b" 00:16:30.460 ], 00:16:30.460 "product_name": "Malloc disk", 00:16:30.460 "block_size": 4096, 00:16:30.460 "num_blocks": 8192, 00:16:30.460 "uuid": "e5dada7d-25b0-4922-8e8d-fcb6aa9b4d8b", 00:16:30.460 "assigned_rate_limits": { 00:16:30.460 "rw_ios_per_sec": 0, 00:16:30.460 "rw_mbytes_per_sec": 0, 00:16:30.460 "r_mbytes_per_sec": 0, 00:16:30.460 "w_mbytes_per_sec": 0 00:16:30.460 }, 00:16:30.460 "claimed": true, 00:16:30.460 "claim_type": "exclusive_write", 00:16:30.460 "zoned": false, 00:16:30.460 "supported_io_types": { 00:16:30.460 "read": true, 00:16:30.460 "write": true, 00:16:30.460 "unmap": true, 00:16:30.460 "flush": true, 00:16:30.460 "reset": true, 00:16:30.460 "nvme_admin": false, 00:16:30.460 "nvme_io": false, 00:16:30.460 "nvme_io_md": false, 00:16:30.460 "write_zeroes": true, 00:16:30.460 "zcopy": true, 00:16:30.460 
"get_zone_info": false, 00:16:30.460 "zone_management": false, 00:16:30.460 "zone_append": false, 00:16:30.460 "compare": false, 00:16:30.460 "compare_and_write": false, 00:16:30.460 "abort": true, 00:16:30.460 "seek_hole": false, 00:16:30.460 "seek_data": false, 00:16:30.460 "copy": true, 00:16:30.460 "nvme_iov_md": false 00:16:30.460 }, 00:16:30.460 "memory_domains": [ 00:16:30.460 { 00:16:30.460 "dma_device_id": "system", 00:16:30.460 "dma_device_type": 1 00:16:30.460 }, 00:16:30.460 { 00:16:30.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.460 "dma_device_type": 2 00:16:30.460 } 00:16:30.460 ], 00:16:30.460 "driver_specific": {} 00:16:30.460 } 00:16:30.460 ] 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.460 "name": "Existed_Raid", 00:16:30.460 "uuid": "8d34eb15-7a69-42d3-9d9d-44d302d65b80", 00:16:30.460 "strip_size_kb": 0, 00:16:30.460 "state": "configuring", 00:16:30.460 "raid_level": "raid1", 00:16:30.460 "superblock": true, 00:16:30.460 "num_base_bdevs": 2, 00:16:30.460 "num_base_bdevs_discovered": 1, 00:16:30.460 "num_base_bdevs_operational": 2, 00:16:30.460 "base_bdevs_list": [ 00:16:30.460 { 00:16:30.460 "name": "BaseBdev1", 00:16:30.460 "uuid": "e5dada7d-25b0-4922-8e8d-fcb6aa9b4d8b", 00:16:30.460 "is_configured": true, 00:16:30.460 "data_offset": 256, 00:16:30.460 "data_size": 7936 00:16:30.460 }, 00:16:30.460 { 00:16:30.460 "name": "BaseBdev2", 00:16:30.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.460 "is_configured": false, 00:16:30.460 "data_offset": 0, 00:16:30.460 "data_size": 0 00:16:30.460 } 00:16:30.460 ] 00:16:30.460 }' 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.460 02:30:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.029 [2024-10-13 02:30:49.406356] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.029 [2024-10-13 02:30:49.406517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.029 [2024-10-13 02:30:49.418412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.029 [2024-10-13 02:30:49.420584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.029 [2024-10-13 02:30:49.420684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:31.029 02:30:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.029 "name": "Existed_Raid", 00:16:31.029 "uuid": "642a61d2-7667-47dc-a6a7-4407fb35df55", 00:16:31.029 "strip_size_kb": 0, 00:16:31.029 "state": "configuring", 00:16:31.029 "raid_level": "raid1", 00:16:31.029 "superblock": true, 
00:16:31.029 "num_base_bdevs": 2, 00:16:31.029 "num_base_bdevs_discovered": 1, 00:16:31.029 "num_base_bdevs_operational": 2, 00:16:31.029 "base_bdevs_list": [ 00:16:31.029 { 00:16:31.029 "name": "BaseBdev1", 00:16:31.029 "uuid": "e5dada7d-25b0-4922-8e8d-fcb6aa9b4d8b", 00:16:31.029 "is_configured": true, 00:16:31.029 "data_offset": 256, 00:16:31.029 "data_size": 7936 00:16:31.029 }, 00:16:31.029 { 00:16:31.029 "name": "BaseBdev2", 00:16:31.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.029 "is_configured": false, 00:16:31.029 "data_offset": 0, 00:16:31.029 "data_size": 0 00:16:31.029 } 00:16:31.029 ] 00:16:31.029 }' 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.029 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.288 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:31.288 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.288 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.288 [2024-10-13 02:30:49.849093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.288 [2024-10-13 02:30:49.849447] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:31.288 [2024-10-13 02:30:49.849510] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:31.288 [2024-10-13 02:30:49.849892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:31.289 BaseBdev2 00:16:31.289 [2024-10-13 02:30:49.850125] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:31.289 [2024-10-13 02:30:49.850152] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000001900 00:16:31.289 [2024-10-13 02:30:49.850300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.289 [ 00:16:31.289 { 00:16:31.289 "name": "BaseBdev2", 00:16:31.289 "aliases": [ 00:16:31.289 "b2ef0833-944e-47a9-9f85-e1abe5931772" 00:16:31.289 ], 00:16:31.289 "product_name": "Malloc 
disk", 00:16:31.289 "block_size": 4096, 00:16:31.289 "num_blocks": 8192, 00:16:31.289 "uuid": "b2ef0833-944e-47a9-9f85-e1abe5931772", 00:16:31.289 "assigned_rate_limits": { 00:16:31.289 "rw_ios_per_sec": 0, 00:16:31.289 "rw_mbytes_per_sec": 0, 00:16:31.289 "r_mbytes_per_sec": 0, 00:16:31.289 "w_mbytes_per_sec": 0 00:16:31.289 }, 00:16:31.289 "claimed": true, 00:16:31.289 "claim_type": "exclusive_write", 00:16:31.289 "zoned": false, 00:16:31.289 "supported_io_types": { 00:16:31.289 "read": true, 00:16:31.289 "write": true, 00:16:31.289 "unmap": true, 00:16:31.289 "flush": true, 00:16:31.289 "reset": true, 00:16:31.289 "nvme_admin": false, 00:16:31.289 "nvme_io": false, 00:16:31.289 "nvme_io_md": false, 00:16:31.289 "write_zeroes": true, 00:16:31.289 "zcopy": true, 00:16:31.289 "get_zone_info": false, 00:16:31.289 "zone_management": false, 00:16:31.289 "zone_append": false, 00:16:31.289 "compare": false, 00:16:31.289 "compare_and_write": false, 00:16:31.289 "abort": true, 00:16:31.289 "seek_hole": false, 00:16:31.289 "seek_data": false, 00:16:31.289 "copy": true, 00:16:31.289 "nvme_iov_md": false 00:16:31.289 }, 00:16:31.289 "memory_domains": [ 00:16:31.289 { 00:16:31.289 "dma_device_id": "system", 00:16:31.289 "dma_device_type": 1 00:16:31.289 }, 00:16:31.289 { 00:16:31.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.289 "dma_device_type": 2 00:16:31.289 } 00:16:31.289 ], 00:16:31.289 "driver_specific": {} 00:16:31.289 } 00:16:31.289 ] 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.289 "name": "Existed_Raid", 00:16:31.289 "uuid": "642a61d2-7667-47dc-a6a7-4407fb35df55", 00:16:31.289 "strip_size_kb": 0, 00:16:31.289 "state": "online", 
00:16:31.289 "raid_level": "raid1", 00:16:31.289 "superblock": true, 00:16:31.289 "num_base_bdevs": 2, 00:16:31.289 "num_base_bdevs_discovered": 2, 00:16:31.289 "num_base_bdevs_operational": 2, 00:16:31.289 "base_bdevs_list": [ 00:16:31.289 { 00:16:31.289 "name": "BaseBdev1", 00:16:31.289 "uuid": "e5dada7d-25b0-4922-8e8d-fcb6aa9b4d8b", 00:16:31.289 "is_configured": true, 00:16:31.289 "data_offset": 256, 00:16:31.289 "data_size": 7936 00:16:31.289 }, 00:16:31.289 { 00:16:31.289 "name": "BaseBdev2", 00:16:31.289 "uuid": "b2ef0833-944e-47a9-9f85-e1abe5931772", 00:16:31.289 "is_configured": true, 00:16:31.289 "data_offset": 256, 00:16:31.289 "data_size": 7936 00:16:31.289 } 00:16:31.289 ] 00:16:31.289 }' 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.289 02:30:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.857 [2024-10-13 02:30:50.392536] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.857 "name": "Existed_Raid", 00:16:31.857 "aliases": [ 00:16:31.857 "642a61d2-7667-47dc-a6a7-4407fb35df55" 00:16:31.857 ], 00:16:31.857 "product_name": "Raid Volume", 00:16:31.857 "block_size": 4096, 00:16:31.857 "num_blocks": 7936, 00:16:31.857 "uuid": "642a61d2-7667-47dc-a6a7-4407fb35df55", 00:16:31.857 "assigned_rate_limits": { 00:16:31.857 "rw_ios_per_sec": 0, 00:16:31.857 "rw_mbytes_per_sec": 0, 00:16:31.857 "r_mbytes_per_sec": 0, 00:16:31.857 "w_mbytes_per_sec": 0 00:16:31.857 }, 00:16:31.857 "claimed": false, 00:16:31.857 "zoned": false, 00:16:31.857 "supported_io_types": { 00:16:31.857 "read": true, 00:16:31.857 "write": true, 00:16:31.857 "unmap": false, 00:16:31.857 "flush": false, 00:16:31.857 "reset": true, 00:16:31.857 "nvme_admin": false, 00:16:31.857 "nvme_io": false, 00:16:31.857 "nvme_io_md": false, 00:16:31.857 "write_zeroes": true, 00:16:31.857 "zcopy": false, 00:16:31.857 "get_zone_info": false, 00:16:31.857 "zone_management": false, 00:16:31.857 "zone_append": false, 00:16:31.857 "compare": false, 00:16:31.857 "compare_and_write": false, 00:16:31.857 "abort": false, 00:16:31.857 "seek_hole": false, 00:16:31.857 "seek_data": false, 00:16:31.857 "copy": false, 00:16:31.857 "nvme_iov_md": false 00:16:31.857 }, 00:16:31.857 "memory_domains": [ 00:16:31.857 { 00:16:31.857 "dma_device_id": "system", 00:16:31.857 "dma_device_type": 1 00:16:31.857 }, 00:16:31.857 { 00:16:31.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.857 "dma_device_type": 2 00:16:31.857 }, 00:16:31.857 { 00:16:31.857 
"dma_device_id": "system", 00:16:31.857 "dma_device_type": 1 00:16:31.857 }, 00:16:31.857 { 00:16:31.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.857 "dma_device_type": 2 00:16:31.857 } 00:16:31.857 ], 00:16:31.857 "driver_specific": { 00:16:31.857 "raid": { 00:16:31.857 "uuid": "642a61d2-7667-47dc-a6a7-4407fb35df55", 00:16:31.857 "strip_size_kb": 0, 00:16:31.857 "state": "online", 00:16:31.857 "raid_level": "raid1", 00:16:31.857 "superblock": true, 00:16:31.857 "num_base_bdevs": 2, 00:16:31.857 "num_base_bdevs_discovered": 2, 00:16:31.857 "num_base_bdevs_operational": 2, 00:16:31.857 "base_bdevs_list": [ 00:16:31.857 { 00:16:31.857 "name": "BaseBdev1", 00:16:31.857 "uuid": "e5dada7d-25b0-4922-8e8d-fcb6aa9b4d8b", 00:16:31.857 "is_configured": true, 00:16:31.857 "data_offset": 256, 00:16:31.857 "data_size": 7936 00:16:31.857 }, 00:16:31.857 { 00:16:31.857 "name": "BaseBdev2", 00:16:31.857 "uuid": "b2ef0833-944e-47a9-9f85-e1abe5931772", 00:16:31.857 "is_configured": true, 00:16:31.857 "data_offset": 256, 00:16:31.857 "data_size": 7936 00:16:31.857 } 00:16:31.857 ] 00:16:31.857 } 00:16:31.857 } 00:16:31.857 }' 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:31.857 BaseBdev2' 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.857 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.117 
02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.117 [2024-10-13 02:30:50.615995] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.117 02:30:50 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.117 "name": "Existed_Raid", 00:16:32.117 "uuid": "642a61d2-7667-47dc-a6a7-4407fb35df55", 00:16:32.117 "strip_size_kb": 0, 00:16:32.117 "state": "online", 00:16:32.117 "raid_level": "raid1", 00:16:32.117 "superblock": true, 00:16:32.117 "num_base_bdevs": 2, 00:16:32.117 "num_base_bdevs_discovered": 1, 00:16:32.117 "num_base_bdevs_operational": 1, 00:16:32.117 "base_bdevs_list": [ 00:16:32.117 { 00:16:32.117 "name": null, 00:16:32.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.117 "is_configured": false, 00:16:32.117 "data_offset": 0, 00:16:32.117 "data_size": 7936 00:16:32.117 }, 00:16:32.117 { 00:16:32.117 "name": "BaseBdev2", 00:16:32.117 "uuid": "b2ef0833-944e-47a9-9f85-e1abe5931772", 00:16:32.117 "is_configured": true, 00:16:32.117 "data_offset": 256, 00:16:32.117 "data_size": 7936 00:16:32.117 } 00:16:32.117 ] 00:16:32.117 }' 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.117 02:30:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:32.685 02:30:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.685 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.686 [2024-10-13 02:30:51.138727] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.686 [2024-10-13 02:30:51.138852] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.686 [2024-10-13 02:30:51.150558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.686 [2024-10-13 02:30:51.150610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.686 [2024-10-13 02:30:51.150621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:32.686 02:30:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96252 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96252 ']' 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96252 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96252 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96252' 00:16:32.686 killing process with pid 96252 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96252 00:16:32.686 [2024-10-13 02:30:51.250430] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.686 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96252 00:16:32.686 [2024-10-13 02:30:51.251542] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.945 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:32.945 00:16:32.945 real 0m4.051s 00:16:32.945 user 0m6.332s 00:16:32.945 sys 0m0.905s 00:16:32.945 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.945 02:30:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.945 ************************************ 00:16:32.945 END TEST raid_state_function_test_sb_4k 00:16:32.945 ************************************ 00:16:32.945 02:30:51 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:32.945 02:30:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:32.945 02:30:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.945 02:30:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:32.945 ************************************ 00:16:32.945 START TEST raid_superblock_test_4k 00:16:32.945 ************************************ 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96492 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 96492 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96492 ']' 00:16:32.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.945 02:30:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:33.205 [2024-10-13 02:30:51.662847] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:33.205 [2024-10-13 02:30:51.663105] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96492 ] 00:16:33.205 [2024-10-13 02:30:51.806146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.205 [2024-10-13 02:30:51.857980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.464 [2024-10-13 02:30:51.900340] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.464 [2024-10-13 02:30:51.900469] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:16:34.033 02:30:52 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.033 malloc1 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.033 [2024-10-13 02:30:52.531076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:34.033 [2024-10-13 02:30:52.531241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.033 
[2024-10-13 02:30:52.531304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:34.033 [2024-10-13 02:30:52.531342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.033 [2024-10-13 02:30:52.533579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.033 [2024-10-13 02:30:52.533673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:34.033 pt1 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.033 malloc2 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.033 [2024-10-13 02:30:52.573657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:34.033 [2024-10-13 02:30:52.573835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.033 [2024-10-13 02:30:52.573888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:34.033 [2024-10-13 02:30:52.573903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.033 [2024-10-13 02:30:52.576550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.033 [2024-10-13 02:30:52.576600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:34.033 pt2 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.033 [2024-10-13 02:30:52.585655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.033 [2024-10-13 02:30:52.587626] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.033 [2024-10-13 02:30:52.587792] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:34.033 [2024-10-13 02:30:52.587807] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:34.033 [2024-10-13 02:30:52.588116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:34.033 [2024-10-13 02:30:52.588259] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:34.033 [2024-10-13 02:30:52.588267] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:34.033 [2024-10-13 02:30:52.588428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.033 "name": "raid_bdev1", 00:16:34.033 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:34.033 "strip_size_kb": 0, 00:16:34.033 "state": "online", 00:16:34.033 "raid_level": "raid1", 00:16:34.033 "superblock": true, 00:16:34.033 "num_base_bdevs": 2, 00:16:34.033 "num_base_bdevs_discovered": 2, 00:16:34.033 "num_base_bdevs_operational": 2, 00:16:34.033 "base_bdevs_list": [ 00:16:34.033 { 00:16:34.033 "name": "pt1", 00:16:34.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.033 "is_configured": true, 00:16:34.033 "data_offset": 256, 00:16:34.033 "data_size": 7936 00:16:34.033 }, 00:16:34.033 { 00:16:34.033 "name": "pt2", 00:16:34.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.033 "is_configured": true, 00:16:34.033 "data_offset": 256, 00:16:34.033 "data_size": 7936 00:16:34.033 } 00:16:34.033 ] 00:16:34.033 }' 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.033 02:30:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:34.602 02:30:53 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.602 [2024-10-13 02:30:53.045218] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:34.602 "name": "raid_bdev1", 00:16:34.602 "aliases": [ 00:16:34.602 "5625f484-6dbc-485b-88aa-04683b25f06b" 00:16:34.602 ], 00:16:34.602 "product_name": "Raid Volume", 00:16:34.602 "block_size": 4096, 00:16:34.602 "num_blocks": 7936, 00:16:34.602 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:34.602 "assigned_rate_limits": { 00:16:34.602 "rw_ios_per_sec": 0, 00:16:34.602 "rw_mbytes_per_sec": 0, 00:16:34.602 "r_mbytes_per_sec": 0, 00:16:34.602 "w_mbytes_per_sec": 0 00:16:34.602 }, 00:16:34.602 "claimed": false, 00:16:34.602 "zoned": false, 00:16:34.602 "supported_io_types": { 00:16:34.602 "read": true, 00:16:34.602 "write": true, 00:16:34.602 "unmap": false, 00:16:34.602 "flush": false, 
00:16:34.602 "reset": true, 00:16:34.602 "nvme_admin": false, 00:16:34.602 "nvme_io": false, 00:16:34.602 "nvme_io_md": false, 00:16:34.602 "write_zeroes": true, 00:16:34.602 "zcopy": false, 00:16:34.602 "get_zone_info": false, 00:16:34.602 "zone_management": false, 00:16:34.602 "zone_append": false, 00:16:34.602 "compare": false, 00:16:34.602 "compare_and_write": false, 00:16:34.602 "abort": false, 00:16:34.602 "seek_hole": false, 00:16:34.602 "seek_data": false, 00:16:34.602 "copy": false, 00:16:34.602 "nvme_iov_md": false 00:16:34.602 }, 00:16:34.602 "memory_domains": [ 00:16:34.602 { 00:16:34.602 "dma_device_id": "system", 00:16:34.602 "dma_device_type": 1 00:16:34.602 }, 00:16:34.602 { 00:16:34.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.602 "dma_device_type": 2 00:16:34.602 }, 00:16:34.602 { 00:16:34.602 "dma_device_id": "system", 00:16:34.602 "dma_device_type": 1 00:16:34.602 }, 00:16:34.602 { 00:16:34.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.602 "dma_device_type": 2 00:16:34.602 } 00:16:34.602 ], 00:16:34.602 "driver_specific": { 00:16:34.602 "raid": { 00:16:34.602 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:34.602 "strip_size_kb": 0, 00:16:34.602 "state": "online", 00:16:34.602 "raid_level": "raid1", 00:16:34.602 "superblock": true, 00:16:34.602 "num_base_bdevs": 2, 00:16:34.602 "num_base_bdevs_discovered": 2, 00:16:34.602 "num_base_bdevs_operational": 2, 00:16:34.602 "base_bdevs_list": [ 00:16:34.602 { 00:16:34.602 "name": "pt1", 00:16:34.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.602 "is_configured": true, 00:16:34.602 "data_offset": 256, 00:16:34.602 "data_size": 7936 00:16:34.602 }, 00:16:34.602 { 00:16:34.602 "name": "pt2", 00:16:34.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.602 "is_configured": true, 00:16:34.602 "data_offset": 256, 00:16:34.602 "data_size": 7936 00:16:34.602 } 00:16:34.602 ] 00:16:34.602 } 00:16:34.602 } 00:16:34.602 }' 00:16:34.602 02:30:53 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:34.602 pt2' 00:16:34.602 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.603 02:30:53 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.603 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.603 [2024-10-13 02:30:53.264736] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.862 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.862 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5625f484-6dbc-485b-88aa-04683b25f06b 00:16:34.862 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5625f484-6dbc-485b-88aa-04683b25f06b ']' 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 [2024-10-13 02:30:53.296421] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.863 [2024-10-13 02:30:53.296465] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.863 [2024-10-13 02:30:53.296572] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.863 [2024-10-13 02:30:53.296638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.863 [2024-10-13 02:30:53.296647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 [2024-10-13 02:30:53.436213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:34.863 [2024-10-13 02:30:53.438258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:34.863 [2024-10-13 02:30:53.438397] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:34.863 [2024-10-13 02:30:53.438504] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:34.863 [2024-10-13 02:30:53.438553] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.863 [2024-10-13 02:30:53.438615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:16:34.863 request: 00:16:34.863 { 00:16:34.863 "name": "raid_bdev1", 00:16:34.863 "raid_level": "raid1", 00:16:34.863 "base_bdevs": [ 00:16:34.863 "malloc1", 00:16:34.863 "malloc2" 00:16:34.863 ], 00:16:34.863 "superblock": false, 00:16:34.863 "method": "bdev_raid_create", 00:16:34.863 "req_id": 1 00:16:34.863 } 00:16:34.863 Got JSON-RPC error response 00:16:34.863 response: 00:16:34.863 { 00:16:34.863 "code": -17, 00:16:34.863 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:34.863 } 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 [2024-10-13 02:30:53.492065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:34.863 [2024-10-13 02:30:53.492253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.863 [2024-10-13 02:30:53.492285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:34.863 [2024-10-13 02:30:53.492295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.863 [2024-10-13 02:30:53.494574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.863 [2024-10-13 02:30:53.494615] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:34.863 [2024-10-13 02:30:53.494718] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:34.863 [2024-10-13 02:30:53.494762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.863 pt1 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:16:34.863 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.123 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.123 "name": "raid_bdev1", 00:16:35.123 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:35.123 "strip_size_kb": 0, 00:16:35.123 "state": "configuring", 00:16:35.123 "raid_level": "raid1", 00:16:35.123 "superblock": true, 00:16:35.123 "num_base_bdevs": 2, 00:16:35.123 "num_base_bdevs_discovered": 1, 00:16:35.123 "num_base_bdevs_operational": 2, 00:16:35.123 "base_bdevs_list": [ 00:16:35.123 { 00:16:35.123 "name": "pt1", 00:16:35.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.123 "is_configured": true, 00:16:35.123 "data_offset": 256, 00:16:35.123 "data_size": 7936 00:16:35.123 }, 00:16:35.123 { 00:16:35.123 "name": null, 00:16:35.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.123 "is_configured": false, 00:16:35.123 "data_offset": 256, 00:16:35.123 "data_size": 7936 00:16:35.123 } 00:16:35.123 ] 00:16:35.123 }' 00:16:35.123 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.123 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.382 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:35.382 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:35.382 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:35.382 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.382 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.382 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:16:35.382 [2024-10-13 02:30:53.923344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.382 [2024-10-13 02:30:53.923523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.382 [2024-10-13 02:30:53.923565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:35.382 [2024-10-13 02:30:53.923595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.382 [2024-10-13 02:30:53.924072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.382 [2024-10-13 02:30:53.924137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.382 [2024-10-13 02:30:53.924249] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:35.383 [2024-10-13 02:30:53.924302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.383 [2024-10-13 02:30:53.924425] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:35.383 [2024-10-13 02:30:53.924464] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.383 [2024-10-13 02:30:53.924734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:35.383 [2024-10-13 02:30:53.924892] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:35.383 [2024-10-13 02:30:53.924940] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:16:35.383 [2024-10-13 02:30:53.925088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.383 pt2 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:35.383 02:30:53 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.383 "name": "raid_bdev1", 00:16:35.383 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:35.383 
"strip_size_kb": 0, 00:16:35.383 "state": "online", 00:16:35.383 "raid_level": "raid1", 00:16:35.383 "superblock": true, 00:16:35.383 "num_base_bdevs": 2, 00:16:35.383 "num_base_bdevs_discovered": 2, 00:16:35.383 "num_base_bdevs_operational": 2, 00:16:35.383 "base_bdevs_list": [ 00:16:35.383 { 00:16:35.383 "name": "pt1", 00:16:35.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.383 "is_configured": true, 00:16:35.383 "data_offset": 256, 00:16:35.383 "data_size": 7936 00:16:35.383 }, 00:16:35.383 { 00:16:35.383 "name": "pt2", 00:16:35.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.383 "is_configured": true, 00:16:35.383 "data_offset": 256, 00:16:35.383 "data_size": 7936 00:16:35.383 } 00:16:35.383 ] 00:16:35.383 }' 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.383 02:30:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.951 02:30:54 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.951 [2024-10-13 02:30:54.390820] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.951 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.951 "name": "raid_bdev1", 00:16:35.951 "aliases": [ 00:16:35.951 "5625f484-6dbc-485b-88aa-04683b25f06b" 00:16:35.951 ], 00:16:35.951 "product_name": "Raid Volume", 00:16:35.951 "block_size": 4096, 00:16:35.951 "num_blocks": 7936, 00:16:35.951 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:35.951 "assigned_rate_limits": { 00:16:35.952 "rw_ios_per_sec": 0, 00:16:35.952 "rw_mbytes_per_sec": 0, 00:16:35.952 "r_mbytes_per_sec": 0, 00:16:35.952 "w_mbytes_per_sec": 0 00:16:35.952 }, 00:16:35.952 "claimed": false, 00:16:35.952 "zoned": false, 00:16:35.952 "supported_io_types": { 00:16:35.952 "read": true, 00:16:35.952 "write": true, 00:16:35.952 "unmap": false, 00:16:35.952 "flush": false, 00:16:35.952 "reset": true, 00:16:35.952 "nvme_admin": false, 00:16:35.952 "nvme_io": false, 00:16:35.952 "nvme_io_md": false, 00:16:35.952 "write_zeroes": true, 00:16:35.952 "zcopy": false, 00:16:35.952 "get_zone_info": false, 00:16:35.952 "zone_management": false, 00:16:35.952 "zone_append": false, 00:16:35.952 "compare": false, 00:16:35.952 "compare_and_write": false, 00:16:35.952 "abort": false, 00:16:35.952 "seek_hole": false, 00:16:35.952 "seek_data": false, 00:16:35.952 "copy": false, 00:16:35.952 "nvme_iov_md": false 00:16:35.952 }, 00:16:35.952 "memory_domains": [ 00:16:35.952 { 00:16:35.952 "dma_device_id": "system", 00:16:35.952 "dma_device_type": 1 00:16:35.952 }, 00:16:35.952 { 00:16:35.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.952 "dma_device_type": 2 00:16:35.952 }, 00:16:35.952 { 00:16:35.952 "dma_device_id": "system", 00:16:35.952 
"dma_device_type": 1 00:16:35.952 }, 00:16:35.952 { 00:16:35.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.952 "dma_device_type": 2 00:16:35.952 } 00:16:35.952 ], 00:16:35.952 "driver_specific": { 00:16:35.952 "raid": { 00:16:35.952 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:35.952 "strip_size_kb": 0, 00:16:35.952 "state": "online", 00:16:35.952 "raid_level": "raid1", 00:16:35.952 "superblock": true, 00:16:35.952 "num_base_bdevs": 2, 00:16:35.952 "num_base_bdevs_discovered": 2, 00:16:35.952 "num_base_bdevs_operational": 2, 00:16:35.952 "base_bdevs_list": [ 00:16:35.952 { 00:16:35.952 "name": "pt1", 00:16:35.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.952 "is_configured": true, 00:16:35.952 "data_offset": 256, 00:16:35.952 "data_size": 7936 00:16:35.952 }, 00:16:35.952 { 00:16:35.952 "name": "pt2", 00:16:35.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.952 "is_configured": true, 00:16:35.952 "data_offset": 256, 00:16:35.952 "data_size": 7936 00:16:35.952 } 00:16:35.952 ] 00:16:35.952 } 00:16:35.952 } 00:16:35.952 }' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:35.952 pt2' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.952 [2024-10-13 
02:30:54.606455] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.952 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5625f484-6dbc-485b-88aa-04683b25f06b '!=' 5625f484-6dbc-485b-88aa-04683b25f06b ']' 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.211 [2024-10-13 02:30:54.654183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.211 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.212 "name": "raid_bdev1", 00:16:36.212 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:36.212 "strip_size_kb": 0, 00:16:36.212 "state": "online", 00:16:36.212 "raid_level": "raid1", 00:16:36.212 "superblock": true, 00:16:36.212 "num_base_bdevs": 2, 00:16:36.212 "num_base_bdevs_discovered": 1, 00:16:36.212 "num_base_bdevs_operational": 1, 00:16:36.212 "base_bdevs_list": [ 00:16:36.212 { 00:16:36.212 "name": null, 00:16:36.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.212 "is_configured": false, 00:16:36.212 "data_offset": 0, 00:16:36.212 "data_size": 7936 00:16:36.212 }, 00:16:36.212 { 00:16:36.212 "name": "pt2", 00:16:36.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.212 "is_configured": true, 00:16:36.212 "data_offset": 256, 00:16:36.212 "data_size": 7936 00:16:36.212 } 00:16:36.212 ] 00:16:36.212 }' 00:16:36.212 02:30:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.212 02:30:54 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.471 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.471 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.471 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.471 [2024-10-13 02:30:55.109362] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.471 [2024-10-13 02:30:55.109406] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.471 [2024-10-13 02:30:55.109511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.471 [2024-10-13 02:30:55.109563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.471 [2024-10-13 02:30:55.109572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:16:36.471 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.471 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.471 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:36.471 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.471 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.471 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.730 [2024-10-13 02:30:55.181222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.730 [2024-10-13 02:30:55.181338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.730 [2024-10-13 02:30:55.181362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:36.730 [2024-10-13 02:30:55.181372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.730 [2024-10-13 02:30:55.183659] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.730 [2024-10-13 02:30:55.183811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.730 [2024-10-13 02:30:55.183931] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:36.730 [2024-10-13 02:30:55.183969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.730 [2024-10-13 02:30:55.184058] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:36.730 [2024-10-13 02:30:55.184066] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:36.730 [2024-10-13 02:30:55.184309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:36.730 [2024-10-13 02:30:55.184419] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:36.730 [2024-10-13 02:30:55.184431] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:16:36.730 [2024-10-13 02:30:55.184539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.730 pt2 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.730 "name": "raid_bdev1", 00:16:36.730 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:36.730 "strip_size_kb": 0, 00:16:36.730 "state": "online", 00:16:36.730 "raid_level": "raid1", 00:16:36.730 "superblock": true, 00:16:36.730 "num_base_bdevs": 2, 00:16:36.730 "num_base_bdevs_discovered": 1, 00:16:36.730 "num_base_bdevs_operational": 1, 00:16:36.730 "base_bdevs_list": [ 00:16:36.730 { 00:16:36.730 "name": null, 00:16:36.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.730 "is_configured": false, 00:16:36.730 "data_offset": 256, 00:16:36.730 "data_size": 7936 00:16:36.730 }, 00:16:36.730 { 00:16:36.730 "name": "pt2", 00:16:36.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.730 "is_configured": true, 00:16:36.730 "data_offset": 256, 00:16:36.730 "data_size": 7936 00:16:36.730 } 00:16:36.730 ] 00:16:36.730 }' 
00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.730 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.990 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.990 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.990 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.990 [2024-10-13 02:30:55.636410] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.990 [2024-10-13 02:30:55.636532] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.990 [2024-10-13 02:30:55.636641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.990 [2024-10-13 02:30:55.636691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.990 [2024-10-13 02:30:55.636703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:36.990 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.990 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.990 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.990 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:36.990 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.990 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.249 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.250 [2024-10-13 02:30:55.700338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.250 [2024-10-13 02:30:55.700508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.250 [2024-10-13 02:30:55.700553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:37.250 [2024-10-13 02:30:55.700597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.250 [2024-10-13 02:30:55.702844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.250 [2024-10-13 02:30:55.702965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:37.250 [2024-10-13 02:30:55.703076] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:37.250 [2024-10-13 02:30:55.703154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:37.250 [2024-10-13 02:30:55.703275] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:37.250 [2024-10-13 02:30:55.703291] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.250 [2024-10-13 02:30:55.703316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:37.250 [2024-10-13 02:30:55.703356] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.250 [2024-10-13 02:30:55.703427] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:16:37.250 [2024-10-13 02:30:55.703437] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:37.250 [2024-10-13 02:30:55.703660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:37.250 [2024-10-13 02:30:55.703786] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:37.250 [2024-10-13 02:30:55.703795] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:37.250 [2024-10-13 02:30:55.703921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.250 pt1 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.250 "name": "raid_bdev1", 00:16:37.250 "uuid": "5625f484-6dbc-485b-88aa-04683b25f06b", 00:16:37.250 "strip_size_kb": 0, 00:16:37.250 "state": "online", 00:16:37.250 "raid_level": "raid1", 00:16:37.250 "superblock": true, 00:16:37.250 "num_base_bdevs": 2, 00:16:37.250 "num_base_bdevs_discovered": 1, 00:16:37.250 "num_base_bdevs_operational": 1, 00:16:37.250 "base_bdevs_list": [ 00:16:37.250 { 00:16:37.250 "name": null, 00:16:37.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.250 "is_configured": false, 00:16:37.250 "data_offset": 256, 00:16:37.250 "data_size": 7936 00:16:37.250 }, 00:16:37.250 { 00:16:37.250 "name": "pt2", 00:16:37.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.250 "is_configured": true, 00:16:37.250 "data_offset": 256, 00:16:37.250 "data_size": 7936 00:16:37.250 } 00:16:37.250 ] 00:16:37.250 }' 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.250 02:30:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.511 [2024-10-13 02:30:56.155843] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.511 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5625f484-6dbc-485b-88aa-04683b25f06b '!=' 5625f484-6dbc-485b-88aa-04683b25f06b ']' 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96492 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96492 ']' 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96492 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96492 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96492' 00:16:37.782 killing process with pid 96492 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96492 00:16:37.782 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96492 00:16:37.782 [2024-10-13 02:30:56.229856] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.782 [2024-10-13 02:30:56.229976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.782 [2024-10-13 02:30:56.230120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.782 [2024-10-13 02:30:56.230133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:37.782 [2024-10-13 02:30:56.253331] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.058 ************************************ 00:16:38.058 END TEST raid_superblock_test_4k 00:16:38.058 ************************************ 00:16:38.058 02:30:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:38.058 00:16:38.058 real 0m4.912s 00:16:38.058 user 0m7.975s 00:16:38.058 sys 0m1.121s 00:16:38.058 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.058 02:30:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.058 02:30:56 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:16:38.058 02:30:56 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:38.058 02:30:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:38.058 02:30:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.058 02:30:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.058 ************************************ 00:16:38.058 START TEST raid_rebuild_test_sb_4k 00:16:38.058 ************************************ 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96805 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96805 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96805 ']' 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:38.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.058 02:30:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:38.058 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:38.058 Zero copy mechanism will not be used. 00:16:38.058 [2024-10-13 02:30:56.646929] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:38.058 [2024-10-13 02:30:56.647056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96805 ] 00:16:38.317 [2024-10-13 02:30:56.785226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.317 [2024-10-13 02:30:56.836616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.317 [2024-10-13 02:30:56.879192] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.317 [2024-10-13 02:30:56.879228] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:38.885 
02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.885 BaseBdev1_malloc 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.885 [2024-10-13 02:30:57.510125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:38.885 [2024-10-13 02:30:57.510192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.885 [2024-10-13 02:30:57.510224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:38.885 [2024-10-13 02:30:57.510244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.885 [2024-10-13 02:30:57.512452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.885 [2024-10-13 02:30:57.512505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:38.885 BaseBdev1 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.885 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.886 BaseBdev2_malloc 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.886 [2024-10-13 02:30:57.546463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:38.886 [2024-10-13 02:30:57.546595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.886 [2024-10-13 02:30:57.546623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:38.886 [2024-10-13 02:30:57.546632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.886 [2024-10-13 02:30:57.548917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.886 [2024-10-13 02:30:57.548955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:38.886 BaseBdev2 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.886 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.145 spare_malloc 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.145 spare_delay 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.145 [2024-10-13 02:30:57.587233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:39.145 [2024-10-13 02:30:57.587307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.145 [2024-10-13 02:30:57.587337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:39.145 [2024-10-13 02:30:57.587346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.145 [2024-10-13 02:30:57.589546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.145 [2024-10-13 02:30:57.589636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:39.145 spare 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.145 
[2024-10-13 02:30:57.599288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.145 [2024-10-13 02:30:57.601203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.145 [2024-10-13 02:30:57.601389] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:39.145 [2024-10-13 02:30:57.601403] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:39.145 [2024-10-13 02:30:57.601707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:39.145 [2024-10-13 02:30:57.601880] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:39.145 [2024-10-13 02:30:57.601894] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:39.145 [2024-10-13 02:30:57.602055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.145 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.146 "name": "raid_bdev1", 00:16:39.146 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:39.146 "strip_size_kb": 0, 00:16:39.146 "state": "online", 00:16:39.146 "raid_level": "raid1", 00:16:39.146 "superblock": true, 00:16:39.146 "num_base_bdevs": 2, 00:16:39.146 "num_base_bdevs_discovered": 2, 00:16:39.146 "num_base_bdevs_operational": 2, 00:16:39.146 "base_bdevs_list": [ 00:16:39.146 { 00:16:39.146 "name": "BaseBdev1", 00:16:39.146 "uuid": "bdd8d3c8-9b79-5d7f-8d46-9be79d7bf135", 00:16:39.146 "is_configured": true, 00:16:39.146 "data_offset": 256, 00:16:39.146 "data_size": 7936 00:16:39.146 }, 00:16:39.146 { 00:16:39.146 "name": "BaseBdev2", 00:16:39.146 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:39.146 "is_configured": true, 00:16:39.146 "data_offset": 256, 00:16:39.146 "data_size": 7936 00:16:39.146 } 00:16:39.146 ] 00:16:39.146 }' 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.146 02:30:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:16:39.405 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.405 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:39.405 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.405 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.405 [2024-10-13 02:30:58.082915] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.663 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:39.922 [2024-10-13 02:30:58.366244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:39.922 /dev/nbd0 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.922 1+0 records in 00:16:39.922 1+0 records out 00:16:39.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348135 s, 11.8 MB/s 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:39.922 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:39.923 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.923 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.923 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:39.923 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:39.923 02:30:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:40.492 7936+0 records in 00:16:40.492 7936+0 records out 00:16:40.492 32505856 bytes (33 MB, 31 MiB) copied, 0.650023 s, 50.0 MB/s 00:16:40.492 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:40.492 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.492 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:40.492 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:40.492 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:40.492 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.492 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:40.752 [2024-10-13 02:30:59.308142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.752 [2024-10-13 02:30:59.328225] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.752 "name": 
"raid_bdev1", 00:16:40.752 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:40.752 "strip_size_kb": 0, 00:16:40.752 "state": "online", 00:16:40.752 "raid_level": "raid1", 00:16:40.752 "superblock": true, 00:16:40.752 "num_base_bdevs": 2, 00:16:40.752 "num_base_bdevs_discovered": 1, 00:16:40.752 "num_base_bdevs_operational": 1, 00:16:40.752 "base_bdevs_list": [ 00:16:40.752 { 00:16:40.752 "name": null, 00:16:40.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.752 "is_configured": false, 00:16:40.752 "data_offset": 0, 00:16:40.752 "data_size": 7936 00:16:40.752 }, 00:16:40.752 { 00:16:40.752 "name": "BaseBdev2", 00:16:40.752 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:40.752 "is_configured": true, 00:16:40.752 "data_offset": 256, 00:16:40.752 "data_size": 7936 00:16:40.752 } 00:16:40.752 ] 00:16:40.752 }' 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.752 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.320 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:41.320 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.320 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.320 [2024-10-13 02:30:59.787537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.320 [2024-10-13 02:30:59.791911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:16:41.320 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.320 02:30:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:41.320 [2024-10-13 02:30:59.793989] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.257 02:31:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.257 "name": "raid_bdev1", 00:16:42.257 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:42.257 "strip_size_kb": 0, 00:16:42.257 "state": "online", 00:16:42.257 "raid_level": "raid1", 00:16:42.257 "superblock": true, 00:16:42.257 "num_base_bdevs": 2, 00:16:42.257 "num_base_bdevs_discovered": 2, 00:16:42.257 "num_base_bdevs_operational": 2, 00:16:42.257 "process": { 00:16:42.257 "type": "rebuild", 00:16:42.257 "target": "spare", 00:16:42.257 "progress": { 00:16:42.257 "blocks": 2560, 00:16:42.257 "percent": 32 00:16:42.257 } 00:16:42.257 }, 00:16:42.257 "base_bdevs_list": [ 00:16:42.257 { 00:16:42.257 "name": "spare", 00:16:42.257 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:42.257 "is_configured": true, 00:16:42.257 "data_offset": 256, 
00:16:42.257 "data_size": 7936 00:16:42.257 }, 00:16:42.257 { 00:16:42.257 "name": "BaseBdev2", 00:16:42.257 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:42.257 "is_configured": true, 00:16:42.257 "data_offset": 256, 00:16:42.257 "data_size": 7936 00:16:42.257 } 00:16:42.257 ] 00:16:42.257 }' 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.257 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.516 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.516 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:42.516 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.517 02:31:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.517 [2024-10-13 02:31:00.954652] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.517 [2024-10-13 02:31:00.999845] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.517 [2024-10-13 02:31:00.999943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.517 [2024-10-13 02:31:00.999966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.517 [2024-10-13 02:31:00.999974] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.517 
02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.517 "name": "raid_bdev1", 00:16:42.517 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:42.517 "strip_size_kb": 0, 00:16:42.517 "state": "online", 00:16:42.517 "raid_level": "raid1", 00:16:42.517 "superblock": true, 00:16:42.517 "num_base_bdevs": 2, 00:16:42.517 "num_base_bdevs_discovered": 1, 00:16:42.517 
"num_base_bdevs_operational": 1, 00:16:42.517 "base_bdevs_list": [ 00:16:42.517 { 00:16:42.517 "name": null, 00:16:42.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.517 "is_configured": false, 00:16:42.517 "data_offset": 0, 00:16:42.517 "data_size": 7936 00:16:42.517 }, 00:16:42.517 { 00:16:42.517 "name": "BaseBdev2", 00:16:42.517 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:42.517 "is_configured": true, 00:16:42.517 "data_offset": 256, 00:16:42.517 "data_size": 7936 00:16:42.517 } 00:16:42.517 ] 00:16:42.517 }' 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.517 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.776 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.776 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.776 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.776 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.776 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.776 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.776 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.776 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.776 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.035 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.035 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.035 
"name": "raid_bdev1", 00:16:43.035 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:43.035 "strip_size_kb": 0, 00:16:43.035 "state": "online", 00:16:43.035 "raid_level": "raid1", 00:16:43.035 "superblock": true, 00:16:43.035 "num_base_bdevs": 2, 00:16:43.035 "num_base_bdevs_discovered": 1, 00:16:43.035 "num_base_bdevs_operational": 1, 00:16:43.035 "base_bdevs_list": [ 00:16:43.035 { 00:16:43.035 "name": null, 00:16:43.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.035 "is_configured": false, 00:16:43.035 "data_offset": 0, 00:16:43.035 "data_size": 7936 00:16:43.035 }, 00:16:43.035 { 00:16:43.035 "name": "BaseBdev2", 00:16:43.035 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:43.035 "is_configured": true, 00:16:43.035 "data_offset": 256, 00:16:43.035 "data_size": 7936 00:16:43.035 } 00:16:43.035 ] 00:16:43.035 }' 00:16:43.035 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.035 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.035 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.035 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.035 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:43.036 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.036 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.036 [2024-10-13 02:31:01.603760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.036 [2024-10-13 02:31:01.608095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:16:43.036 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:43.036 02:31:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:43.036 [2024-10-13 02:31:01.610191] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.971 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.231 "name": "raid_bdev1", 00:16:44.231 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:44.231 "strip_size_kb": 0, 00:16:44.231 "state": "online", 00:16:44.231 "raid_level": "raid1", 00:16:44.231 "superblock": true, 00:16:44.231 "num_base_bdevs": 2, 00:16:44.231 "num_base_bdevs_discovered": 2, 00:16:44.231 "num_base_bdevs_operational": 2, 00:16:44.231 "process": { 00:16:44.231 "type": "rebuild", 00:16:44.231 "target": "spare", 00:16:44.231 "progress": { 00:16:44.231 "blocks": 2560, 00:16:44.231 
"percent": 32 00:16:44.231 } 00:16:44.231 }, 00:16:44.231 "base_bdevs_list": [ 00:16:44.231 { 00:16:44.231 "name": "spare", 00:16:44.231 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:44.231 "is_configured": true, 00:16:44.231 "data_offset": 256, 00:16:44.231 "data_size": 7936 00:16:44.231 }, 00:16:44.231 { 00:16:44.231 "name": "BaseBdev2", 00:16:44.231 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:44.231 "is_configured": true, 00:16:44.231 "data_offset": 256, 00:16:44.231 "data_size": 7936 00:16:44.231 } 00:16:44.231 ] 00:16:44.231 }' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:44.231 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=570 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.231 "name": "raid_bdev1", 00:16:44.231 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:44.231 "strip_size_kb": 0, 00:16:44.231 "state": "online", 00:16:44.231 "raid_level": "raid1", 00:16:44.231 "superblock": true, 00:16:44.231 "num_base_bdevs": 2, 00:16:44.231 "num_base_bdevs_discovered": 2, 00:16:44.231 "num_base_bdevs_operational": 2, 00:16:44.231 "process": { 00:16:44.231 "type": "rebuild", 00:16:44.231 "target": "spare", 00:16:44.231 "progress": { 00:16:44.231 "blocks": 2816, 00:16:44.231 "percent": 35 00:16:44.231 } 00:16:44.231 }, 00:16:44.231 "base_bdevs_list": [ 00:16:44.231 { 00:16:44.231 "name": "spare", 00:16:44.231 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:44.231 "is_configured": true, 00:16:44.231 "data_offset": 256, 00:16:44.231 "data_size": 7936 00:16:44.231 }, 00:16:44.231 { 00:16:44.231 "name": "BaseBdev2", 
00:16:44.231 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:44.231 "is_configured": true, 00:16:44.231 "data_offset": 256, 00:16:44.231 "data_size": 7936 00:16:44.231 } 00:16:44.231 ] 00:16:44.231 }' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.231 02:31:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.612 "name": "raid_bdev1", 00:16:45.612 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:45.612 "strip_size_kb": 0, 00:16:45.612 "state": "online", 00:16:45.612 "raid_level": "raid1", 00:16:45.612 "superblock": true, 00:16:45.612 "num_base_bdevs": 2, 00:16:45.612 "num_base_bdevs_discovered": 2, 00:16:45.612 "num_base_bdevs_operational": 2, 00:16:45.612 "process": { 00:16:45.612 "type": "rebuild", 00:16:45.612 "target": "spare", 00:16:45.612 "progress": { 00:16:45.612 "blocks": 5632, 00:16:45.612 "percent": 70 00:16:45.612 } 00:16:45.612 }, 00:16:45.612 "base_bdevs_list": [ 00:16:45.612 { 00:16:45.612 "name": "spare", 00:16:45.612 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:45.612 "is_configured": true, 00:16:45.612 "data_offset": 256, 00:16:45.612 "data_size": 7936 00:16:45.612 }, 00:16:45.612 { 00:16:45.612 "name": "BaseBdev2", 00:16:45.612 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:45.612 "is_configured": true, 00:16:45.612 "data_offset": 256, 00:16:45.612 "data_size": 7936 00:16:45.612 } 00:16:45.612 ] 00:16:45.612 }' 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.612 02:31:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.612 02:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.612 02:31:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.180 [2024-10-13 02:31:04.723566] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:46.180 [2024-10-13 02:31:04.723776] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:46.180 [2024-10-13 02:31:04.723986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.439 "name": "raid_bdev1", 00:16:46.439 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:46.439 "strip_size_kb": 0, 00:16:46.439 "state": "online", 00:16:46.439 "raid_level": "raid1", 00:16:46.439 "superblock": true, 00:16:46.439 "num_base_bdevs": 2, 00:16:46.439 "num_base_bdevs_discovered": 2, 00:16:46.439 "num_base_bdevs_operational": 2, 00:16:46.439 "base_bdevs_list": [ 00:16:46.439 { 00:16:46.439 "name": 
"spare", 00:16:46.439 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:46.439 "is_configured": true, 00:16:46.439 "data_offset": 256, 00:16:46.439 "data_size": 7936 00:16:46.439 }, 00:16:46.439 { 00:16:46.439 "name": "BaseBdev2", 00:16:46.439 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:46.439 "is_configured": true, 00:16:46.439 "data_offset": 256, 00:16:46.439 "data_size": 7936 00:16:46.439 } 00:16:46.439 ] 00:16:46.439 }' 00:16:46.439 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.699 "name": "raid_bdev1", 00:16:46.699 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:46.699 "strip_size_kb": 0, 00:16:46.699 "state": "online", 00:16:46.699 "raid_level": "raid1", 00:16:46.699 "superblock": true, 00:16:46.699 "num_base_bdevs": 2, 00:16:46.699 "num_base_bdevs_discovered": 2, 00:16:46.699 "num_base_bdevs_operational": 2, 00:16:46.699 "base_bdevs_list": [ 00:16:46.699 { 00:16:46.699 "name": "spare", 00:16:46.699 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:46.699 "is_configured": true, 00:16:46.699 "data_offset": 256, 00:16:46.699 "data_size": 7936 00:16:46.699 }, 00:16:46.699 { 00:16:46.699 "name": "BaseBdev2", 00:16:46.699 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:46.699 "is_configured": true, 00:16:46.699 "data_offset": 256, 00:16:46.699 "data_size": 7936 00:16:46.699 } 00:16:46.699 ] 00:16:46.699 }' 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.699 "name": "raid_bdev1", 00:16:46.699 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:46.699 "strip_size_kb": 0, 00:16:46.699 "state": "online", 00:16:46.699 "raid_level": "raid1", 00:16:46.699 "superblock": true, 00:16:46.699 "num_base_bdevs": 2, 00:16:46.699 "num_base_bdevs_discovered": 2, 00:16:46.699 "num_base_bdevs_operational": 2, 00:16:46.699 "base_bdevs_list": [ 00:16:46.699 { 00:16:46.699 "name": "spare", 00:16:46.699 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:46.699 "is_configured": true, 00:16:46.699 "data_offset": 256, 00:16:46.699 "data_size": 7936 00:16:46.699 }, 00:16:46.699 
{ 00:16:46.699 "name": "BaseBdev2", 00:16:46.699 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:46.699 "is_configured": true, 00:16:46.699 "data_offset": 256, 00:16:46.699 "data_size": 7936 00:16:46.699 } 00:16:46.699 ] 00:16:46.699 }' 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.699 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.267 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:47.267 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.267 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.267 [2024-10-13 02:31:05.750704] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.267 [2024-10-13 02:31:05.750807] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.267 [2024-10-13 02:31:05.750940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.267 [2024-10-13 02:31:05.751040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.268 [2024-10-13 02:31:05.751089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.268 
02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:47.268 02:31:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:47.527 /dev/nbd0 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:47.527 02:31:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.527 1+0 records in 00:16:47.527 1+0 records out 00:16:47.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569829 s, 7.2 MB/s 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:47.527 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:47.786 /dev/nbd1 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.786 1+0 records in 00:16:47.786 1+0 records out 00:16:47.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515132 s, 8.0 MB/s 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.786 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:48.045 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.304 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.304 [2024-10-13 02:31:06.927138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:48.305 [2024-10-13 02:31:06.927212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.305 [2024-10-13 02:31:06.927263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:48.305 [2024-10-13 02:31:06.927276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.305 [2024-10-13 02:31:06.929653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.305 [2024-10-13 02:31:06.929746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:48.305 [2024-10-13 02:31:06.929863] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:48.305 [2024-10-13 02:31:06.929970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.305 [2024-10-13 02:31:06.930117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.305 spare 00:16:48.305 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.305 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:48.305 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.305 02:31:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.564 [2024-10-13 02:31:07.030089] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:48.564 [2024-10-13 02:31:07.030141] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:48.564 [2024-10-13 02:31:07.030504] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:16:48.564 [2024-10-13 02:31:07.030692] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:48.564 [2024-10-13 02:31:07.030705] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:48.564 [2024-10-13 02:31:07.030934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.564 02:31:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.564 "name": "raid_bdev1", 00:16:48.564 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:48.564 "strip_size_kb": 0, 00:16:48.564 "state": "online", 00:16:48.564 "raid_level": "raid1", 00:16:48.564 "superblock": true, 00:16:48.564 "num_base_bdevs": 2, 00:16:48.564 "num_base_bdevs_discovered": 2, 00:16:48.564 "num_base_bdevs_operational": 2, 00:16:48.564 "base_bdevs_list": [ 00:16:48.564 { 00:16:48.564 "name": "spare", 00:16:48.564 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:48.564 "is_configured": true, 00:16:48.564 "data_offset": 256, 00:16:48.564 "data_size": 7936 00:16:48.564 }, 00:16:48.564 { 00:16:48.564 "name": "BaseBdev2", 00:16:48.564 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:48.564 "is_configured": true, 00:16:48.564 "data_offset": 256, 00:16:48.564 "data_size": 7936 00:16:48.564 } 00:16:48.564 ] 00:16:48.564 }' 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.564 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.132 02:31:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.132 "name": "raid_bdev1", 00:16:49.132 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:49.132 "strip_size_kb": 0, 00:16:49.132 "state": "online", 00:16:49.132 "raid_level": "raid1", 00:16:49.132 "superblock": true, 00:16:49.132 "num_base_bdevs": 2, 00:16:49.132 "num_base_bdevs_discovered": 2, 00:16:49.132 "num_base_bdevs_operational": 2, 00:16:49.132 "base_bdevs_list": [ 00:16:49.132 { 00:16:49.132 "name": "spare", 00:16:49.132 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:49.132 "is_configured": true, 00:16:49.132 "data_offset": 256, 00:16:49.132 "data_size": 7936 00:16:49.132 }, 00:16:49.132 { 00:16:49.132 "name": "BaseBdev2", 00:16:49.132 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:49.132 "is_configured": true, 00:16:49.132 "data_offset": 256, 00:16:49.132 "data_size": 7936 00:16:49.132 } 00:16:49.132 ] 00:16:49.132 }' 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.132 02:31:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.132 [2024-10-13 02:31:07.721863] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.132 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.132 "name": "raid_bdev1", 00:16:49.132 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:49.132 "strip_size_kb": 0, 00:16:49.132 "state": "online", 00:16:49.132 "raid_level": "raid1", 00:16:49.132 "superblock": true, 00:16:49.132 "num_base_bdevs": 2, 00:16:49.132 "num_base_bdevs_discovered": 1, 00:16:49.132 "num_base_bdevs_operational": 1, 00:16:49.132 "base_bdevs_list": [ 00:16:49.132 { 00:16:49.132 "name": null, 00:16:49.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.132 "is_configured": false, 00:16:49.132 "data_offset": 0, 00:16:49.132 "data_size": 7936 00:16:49.132 }, 00:16:49.132 { 00:16:49.132 "name": "BaseBdev2", 00:16:49.132 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:49.132 "is_configured": true, 00:16:49.132 "data_offset": 256, 00:16:49.132 "data_size": 7936 00:16:49.133 } 00:16:49.133 ] 00:16:49.133 }' 
00:16:49.133 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.133 02:31:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.700 02:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:49.700 02:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.700 02:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.700 [2024-10-13 02:31:08.221046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.700 [2024-10-13 02:31:08.221343] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:49.700 [2024-10-13 02:31:08.221404] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:49.700 [2024-10-13 02:31:08.221484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.700 [2024-10-13 02:31:08.225609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:16:49.700 02:31:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.700 02:31:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:49.700 [2024-10-13 02:31:08.227674] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.637 "name": "raid_bdev1", 00:16:50.637 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:50.637 "strip_size_kb": 0, 00:16:50.637 "state": "online", 00:16:50.637 "raid_level": "raid1", 00:16:50.637 "superblock": true, 00:16:50.637 "num_base_bdevs": 2, 00:16:50.637 "num_base_bdevs_discovered": 2, 00:16:50.637 "num_base_bdevs_operational": 2, 00:16:50.637 "process": { 00:16:50.637 "type": "rebuild", 00:16:50.637 "target": "spare", 00:16:50.637 "progress": { 00:16:50.637 "blocks": 2560, 00:16:50.637 "percent": 32 00:16:50.637 } 00:16:50.637 }, 00:16:50.637 "base_bdevs_list": [ 00:16:50.637 { 00:16:50.637 "name": "spare", 00:16:50.637 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:50.637 "is_configured": true, 00:16:50.637 "data_offset": 256, 00:16:50.637 "data_size": 7936 00:16:50.637 }, 00:16:50.637 { 00:16:50.637 "name": "BaseBdev2", 00:16:50.637 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:50.637 "is_configured": true, 00:16:50.637 "data_offset": 256, 00:16:50.637 "data_size": 7936 00:16:50.637 } 00:16:50.637 ] 00:16:50.637 }' 00:16:50.637 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.903 [2024-10-13 02:31:09.392424] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.903 [2024-10-13 02:31:09.432813] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:50.903 [2024-10-13 02:31:09.432926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.903 [2024-10-13 02:31:09.432948] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.903 [2024-10-13 02:31:09.432957] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.903 "name": "raid_bdev1", 00:16:50.903 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:50.903 "strip_size_kb": 0, 00:16:50.903 "state": "online", 00:16:50.903 "raid_level": "raid1", 00:16:50.903 "superblock": true, 00:16:50.903 "num_base_bdevs": 2, 00:16:50.903 "num_base_bdevs_discovered": 1, 00:16:50.903 "num_base_bdevs_operational": 1, 00:16:50.903 "base_bdevs_list": [ 00:16:50.903 { 00:16:50.903 "name": null, 00:16:50.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.903 "is_configured": false, 00:16:50.903 "data_offset": 0, 00:16:50.903 "data_size": 7936 00:16:50.903 }, 00:16:50.903 { 00:16:50.903 "name": "BaseBdev2", 00:16:50.903 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:50.903 "is_configured": true, 00:16:50.903 
"data_offset": 256, 00:16:50.903 "data_size": 7936 00:16:50.903 } 00:16:50.903 ] 00:16:50.903 }' 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.903 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.510 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:51.510 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.510 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.510 [2024-10-13 02:31:09.928666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:51.510 [2024-10-13 02:31:09.928839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.510 [2024-10-13 02:31:09.928908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:51.510 [2024-10-13 02:31:09.928960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.510 [2024-10-13 02:31:09.929480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.510 [2024-10-13 02:31:09.929540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:51.510 [2024-10-13 02:31:09.929667] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:51.510 [2024-10-13 02:31:09.929708] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:51.510 [2024-10-13 02:31:09.929752] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:51.510 [2024-10-13 02:31:09.929795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.510 [2024-10-13 02:31:09.933948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:16:51.510 spare 00:16:51.510 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.510 02:31:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:51.510 [2024-10-13 02:31:09.936050] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.445 "name": "raid_bdev1", 00:16:52.445 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:52.445 "strip_size_kb": 0, 00:16:52.445 
"state": "online", 00:16:52.445 "raid_level": "raid1", 00:16:52.445 "superblock": true, 00:16:52.445 "num_base_bdevs": 2, 00:16:52.445 "num_base_bdevs_discovered": 2, 00:16:52.445 "num_base_bdevs_operational": 2, 00:16:52.445 "process": { 00:16:52.445 "type": "rebuild", 00:16:52.445 "target": "spare", 00:16:52.445 "progress": { 00:16:52.445 "blocks": 2560, 00:16:52.445 "percent": 32 00:16:52.445 } 00:16:52.445 }, 00:16:52.445 "base_bdevs_list": [ 00:16:52.445 { 00:16:52.445 "name": "spare", 00:16:52.445 "uuid": "7ceb8369-e0bf-5b63-a759-8eab6673ddb4", 00:16:52.445 "is_configured": true, 00:16:52.445 "data_offset": 256, 00:16:52.445 "data_size": 7936 00:16:52.445 }, 00:16:52.445 { 00:16:52.445 "name": "BaseBdev2", 00:16:52.445 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:52.445 "is_configured": true, 00:16:52.445 "data_offset": 256, 00:16:52.445 "data_size": 7936 00:16:52.445 } 00:16:52.445 ] 00:16:52.445 }' 00:16:52.445 02:31:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.445 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.445 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.445 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.445 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:52.445 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.445 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.445 [2024-10-13 02:31:11.100120] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.704 [2024-10-13 02:31:11.141043] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:16:52.704 [2024-10-13 02:31:11.141235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.704 [2024-10-13 02:31:11.141271] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.704 [2024-10-13 02:31:11.141295] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.704 02:31:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.704 "name": "raid_bdev1", 00:16:52.704 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:52.704 "strip_size_kb": 0, 00:16:52.704 "state": "online", 00:16:52.704 "raid_level": "raid1", 00:16:52.704 "superblock": true, 00:16:52.704 "num_base_bdevs": 2, 00:16:52.704 "num_base_bdevs_discovered": 1, 00:16:52.704 "num_base_bdevs_operational": 1, 00:16:52.704 "base_bdevs_list": [ 00:16:52.704 { 00:16:52.704 "name": null, 00:16:52.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.704 "is_configured": false, 00:16:52.704 "data_offset": 0, 00:16:52.704 "data_size": 7936 00:16:52.704 }, 00:16:52.704 { 00:16:52.704 "name": "BaseBdev2", 00:16:52.704 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:52.704 "is_configured": true, 00:16:52.704 "data_offset": 256, 00:16:52.704 "data_size": 7936 00:16:52.704 } 00:16:52.704 ] 00:16:52.704 }' 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.704 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.963 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.963 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.963 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.963 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.963 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.963 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.963 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.963 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.963 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.222 "name": "raid_bdev1", 00:16:53.222 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:53.222 "strip_size_kb": 0, 00:16:53.222 "state": "online", 00:16:53.222 "raid_level": "raid1", 00:16:53.222 "superblock": true, 00:16:53.222 "num_base_bdevs": 2, 00:16:53.222 "num_base_bdevs_discovered": 1, 00:16:53.222 "num_base_bdevs_operational": 1, 00:16:53.222 "base_bdevs_list": [ 00:16:53.222 { 00:16:53.222 "name": null, 00:16:53.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.222 "is_configured": false, 00:16:53.222 "data_offset": 0, 00:16:53.222 "data_size": 7936 00:16:53.222 }, 00:16:53.222 { 00:16:53.222 "name": "BaseBdev2", 00:16:53.222 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:53.222 "is_configured": true, 00:16:53.222 "data_offset": 256, 00:16:53.222 "data_size": 7936 00:16:53.222 } 00:16:53.222 ] 00:16:53.222 }' 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.222 [2024-10-13 02:31:11.780552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:53.222 [2024-10-13 02:31:11.780630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.222 [2024-10-13 02:31:11.780653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:53.222 [2024-10-13 02:31:11.780664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.222 [2024-10-13 02:31:11.781105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.222 [2024-10-13 02:31:11.781126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:53.222 [2024-10-13 02:31:11.781204] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:53.222 [2024-10-13 02:31:11.781231] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:53.222 [2024-10-13 02:31:11.781242] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:53.222 [2024-10-13 02:31:11.781254] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:53.222 BaseBdev1 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.222 02:31:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.159 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.418 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.418 "name": "raid_bdev1", 00:16:54.418 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:54.418 "strip_size_kb": 0, 00:16:54.418 "state": "online", 00:16:54.418 "raid_level": "raid1", 00:16:54.418 "superblock": true, 00:16:54.418 "num_base_bdevs": 2, 00:16:54.418 "num_base_bdevs_discovered": 1, 00:16:54.418 "num_base_bdevs_operational": 1, 00:16:54.418 "base_bdevs_list": [ 00:16:54.418 { 00:16:54.418 "name": null, 00:16:54.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.418 "is_configured": false, 00:16:54.418 "data_offset": 0, 00:16:54.418 "data_size": 7936 00:16:54.418 }, 00:16:54.418 { 00:16:54.418 "name": "BaseBdev2", 00:16:54.418 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:54.418 "is_configured": true, 00:16:54.418 "data_offset": 256, 00:16:54.418 "data_size": 7936 00:16:54.418 } 00:16:54.418 ] 00:16:54.418 }' 00:16:54.418 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.418 02:31:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.680 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.681 "name": "raid_bdev1", 00:16:54.681 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:54.681 "strip_size_kb": 0, 00:16:54.681 "state": "online", 00:16:54.681 "raid_level": "raid1", 00:16:54.681 "superblock": true, 00:16:54.681 "num_base_bdevs": 2, 00:16:54.681 "num_base_bdevs_discovered": 1, 00:16:54.681 "num_base_bdevs_operational": 1, 00:16:54.681 "base_bdevs_list": [ 00:16:54.681 { 00:16:54.681 "name": null, 00:16:54.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.681 "is_configured": false, 00:16:54.681 "data_offset": 0, 00:16:54.681 "data_size": 7936 00:16:54.681 }, 00:16:54.681 { 00:16:54.681 "name": "BaseBdev2", 00:16:54.681 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:54.681 "is_configured": true, 00:16:54.681 "data_offset": 256, 00:16:54.681 "data_size": 7936 00:16:54.681 } 00:16:54.681 ] 00:16:54.681 }' 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.681 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.940 [2024-10-13 02:31:13.402042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.940 [2024-10-13 02:31:13.402276] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:54.940 [2024-10-13 02:31:13.402337] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:54.940 request: 00:16:54.940 { 00:16:54.940 "base_bdev": "BaseBdev1", 00:16:54.940 "raid_bdev": "raid_bdev1", 00:16:54.940 "method": "bdev_raid_add_base_bdev", 00:16:54.940 "req_id": 1 00:16:54.940 } 00:16:54.940 Got JSON-RPC error response 00:16:54.940 response: 00:16:54.940 { 00:16:54.940 "code": -22, 00:16:54.940 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:54.940 } 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:54.940 02:31:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.877 "name": "raid_bdev1", 00:16:55.877 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:55.877 "strip_size_kb": 0, 00:16:55.877 "state": "online", 00:16:55.877 "raid_level": "raid1", 00:16:55.877 "superblock": true, 00:16:55.877 "num_base_bdevs": 2, 00:16:55.877 "num_base_bdevs_discovered": 1, 00:16:55.877 "num_base_bdevs_operational": 1, 00:16:55.877 "base_bdevs_list": [ 00:16:55.877 { 00:16:55.877 "name": null, 00:16:55.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.877 "is_configured": false, 00:16:55.877 "data_offset": 0, 00:16:55.877 "data_size": 7936 00:16:55.877 }, 00:16:55.877 { 00:16:55.877 "name": "BaseBdev2", 00:16:55.877 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:55.877 "is_configured": true, 00:16:55.877 "data_offset": 256, 00:16:55.877 "data_size": 7936 00:16:55.877 } 00:16:55.877 ] 00:16:55.877 }' 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.877 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.445 02:31:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.445 "name": "raid_bdev1", 00:16:56.445 "uuid": "97358fd1-eecd-4939-8043-556d7661ae42", 00:16:56.445 "strip_size_kb": 0, 00:16:56.445 "state": "online", 00:16:56.445 "raid_level": "raid1", 00:16:56.445 "superblock": true, 00:16:56.445 "num_base_bdevs": 2, 00:16:56.445 "num_base_bdevs_discovered": 1, 00:16:56.445 "num_base_bdevs_operational": 1, 00:16:56.445 "base_bdevs_list": [ 00:16:56.445 { 00:16:56.445 "name": null, 00:16:56.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.445 "is_configured": false, 00:16:56.445 "data_offset": 0, 00:16:56.445 "data_size": 7936 00:16:56.445 }, 00:16:56.445 { 00:16:56.445 "name": "BaseBdev2", 00:16:56.445 "uuid": "936d0625-1b47-575d-ba7e-0a7c63c3e931", 00:16:56.445 "is_configured": true, 00:16:56.445 "data_offset": 256, 00:16:56.445 "data_size": 7936 00:16:56.445 } 00:16:56.445 ] 00:16:56.445 }' 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.445 02:31:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.445 02:31:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96805 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96805 ']' 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96805 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96805 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:56.445 killing process with pid 96805 00:16:56.445 Received shutdown signal, test time was about 60.000000 seconds 00:16:56.445 00:16:56.445 Latency(us) 00:16:56.445 [2024-10-13T02:31:15.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.445 [2024-10-13T02:31:15.129Z] =================================================================================================================== 00:16:56.445 [2024-10-13T02:31:15.129Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96805' 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96805 00:16:56.445 [2024-10-13 02:31:15.057135] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.445 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96805 00:16:56.445 [2024-10-13 02:31:15.057283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.445 [2024-10-13 
02:31:15.057338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.445 [2024-10-13 02:31:15.057347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:56.445 [2024-10-13 02:31:15.089315] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.704 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:56.704 00:16:56.704 real 0m18.759s 00:16:56.704 user 0m25.070s 00:16:56.704 sys 0m2.741s 00:16:56.704 ************************************ 00:16:56.704 END TEST raid_rebuild_test_sb_4k 00:16:56.704 ************************************ 00:16:56.704 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:56.704 02:31:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.704 02:31:15 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:56.704 02:31:15 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:56.704 02:31:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:56.704 02:31:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:56.704 02:31:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.963 ************************************ 00:16:56.963 START TEST raid_state_function_test_sb_md_separate 00:16:56.963 ************************************ 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:56.963 
02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:56.963 02:31:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97486 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97486' 00:16:56.963 Process raid pid: 97486 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97486 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97486 ']' 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:56.963 02:31:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:56.963 [2024-10-13 02:31:15.495846] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:56.963 [2024-10-13 02:31:15.496104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.963 [2024-10-13 02:31:15.644106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.222 [2024-10-13 02:31:15.696128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.222 [2024-10-13 02:31:15.738196] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.222 [2024-10-13 02:31:15.738233] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.790 [2024-10-13 02:31:16.343830] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.790 [2024-10-13 02:31:16.343904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:16:57.790 [2024-10-13 02:31:16.343919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.790 [2024-10-13 02:31:16.343929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.790 
02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.790 "name": "Existed_Raid", 00:16:57.790 "uuid": "929c492d-25b3-41eb-bbe8-f2bcdf85cf92", 00:16:57.790 "strip_size_kb": 0, 00:16:57.790 "state": "configuring", 00:16:57.790 "raid_level": "raid1", 00:16:57.790 "superblock": true, 00:16:57.790 "num_base_bdevs": 2, 00:16:57.790 "num_base_bdevs_discovered": 0, 00:16:57.790 "num_base_bdevs_operational": 2, 00:16:57.790 "base_bdevs_list": [ 00:16:57.790 { 00:16:57.790 "name": "BaseBdev1", 00:16:57.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.790 "is_configured": false, 00:16:57.790 "data_offset": 0, 00:16:57.790 "data_size": 0 00:16:57.790 }, 00:16:57.790 { 00:16:57.790 "name": "BaseBdev2", 00:16:57.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.790 "is_configured": false, 00:16:57.790 "data_offset": 0, 00:16:57.790 "data_size": 0 00:16:57.790 } 00:16:57.790 ] 00:16:57.790 }' 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.790 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.361 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:58.361 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.361 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.361 
[2024-10-13 02:31:16.786925] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.361 [2024-10-13 02:31:16.786975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:16:58.361 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.361 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:58.361 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.361 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.361 [2024-10-13 02:31:16.794910] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:58.361 [2024-10-13 02:31:16.794961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:58.361 [2024-10-13 02:31:16.794979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.361 [2024-10-13 02:31:16.794989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.361 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.362 [2024-10-13 02:31:16.812420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.362 
BaseBdev1 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.362 [ 00:16:58.362 { 00:16:58.362 "name": "BaseBdev1", 00:16:58.362 "aliases": [ 00:16:58.362 "cc86c545-8301-42f1-bb94-9014f787d504" 00:16:58.362 ], 00:16:58.362 "product_name": "Malloc disk", 
00:16:58.362 "block_size": 4096, 00:16:58.362 "num_blocks": 8192, 00:16:58.362 "uuid": "cc86c545-8301-42f1-bb94-9014f787d504", 00:16:58.362 "md_size": 32, 00:16:58.362 "md_interleave": false, 00:16:58.362 "dif_type": 0, 00:16:58.362 "assigned_rate_limits": { 00:16:58.362 "rw_ios_per_sec": 0, 00:16:58.362 "rw_mbytes_per_sec": 0, 00:16:58.362 "r_mbytes_per_sec": 0, 00:16:58.362 "w_mbytes_per_sec": 0 00:16:58.362 }, 00:16:58.362 "claimed": true, 00:16:58.362 "claim_type": "exclusive_write", 00:16:58.362 "zoned": false, 00:16:58.362 "supported_io_types": { 00:16:58.362 "read": true, 00:16:58.362 "write": true, 00:16:58.362 "unmap": true, 00:16:58.362 "flush": true, 00:16:58.362 "reset": true, 00:16:58.362 "nvme_admin": false, 00:16:58.362 "nvme_io": false, 00:16:58.362 "nvme_io_md": false, 00:16:58.362 "write_zeroes": true, 00:16:58.362 "zcopy": true, 00:16:58.362 "get_zone_info": false, 00:16:58.362 "zone_management": false, 00:16:58.362 "zone_append": false, 00:16:58.362 "compare": false, 00:16:58.362 "compare_and_write": false, 00:16:58.362 "abort": true, 00:16:58.362 "seek_hole": false, 00:16:58.362 "seek_data": false, 00:16:58.362 "copy": true, 00:16:58.362 "nvme_iov_md": false 00:16:58.362 }, 00:16:58.362 "memory_domains": [ 00:16:58.362 { 00:16:58.362 "dma_device_id": "system", 00:16:58.362 "dma_device_type": 1 00:16:58.362 }, 00:16:58.362 { 00:16:58.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.362 "dma_device_type": 2 00:16:58.362 } 00:16:58.362 ], 00:16:58.362 "driver_specific": {} 00:16:58.362 } 00:16:58.362 ] 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:58.362 02:31:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.362 "name": "Existed_Raid", 00:16:58.362 "uuid": "7b4f43b5-b043-4768-87d8-412be928692b", 
00:16:58.362 "strip_size_kb": 0, 00:16:58.362 "state": "configuring", 00:16:58.362 "raid_level": "raid1", 00:16:58.362 "superblock": true, 00:16:58.362 "num_base_bdevs": 2, 00:16:58.362 "num_base_bdevs_discovered": 1, 00:16:58.362 "num_base_bdevs_operational": 2, 00:16:58.362 "base_bdevs_list": [ 00:16:58.362 { 00:16:58.362 "name": "BaseBdev1", 00:16:58.362 "uuid": "cc86c545-8301-42f1-bb94-9014f787d504", 00:16:58.362 "is_configured": true, 00:16:58.362 "data_offset": 256, 00:16:58.362 "data_size": 7936 00:16:58.362 }, 00:16:58.362 { 00:16:58.362 "name": "BaseBdev2", 00:16:58.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.362 "is_configured": false, 00:16:58.362 "data_offset": 0, 00:16:58.362 "data_size": 0 00:16:58.362 } 00:16:58.362 ] 00:16:58.362 }' 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.362 02:31:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.931 [2024-10-13 02:31:17.327694] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.931 [2024-10-13 02:31:17.327759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:58.931 02:31:17 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.931 [2024-10-13 02:31:17.335757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.931 [2024-10-13 02:31:17.337782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.931 [2024-10-13 02:31:17.337830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.931 "name": "Existed_Raid", 00:16:58.931 "uuid": "bd07c254-bdc5-4cc5-a989-aebf2eb7e926", 00:16:58.931 "strip_size_kb": 0, 00:16:58.931 "state": "configuring", 00:16:58.931 "raid_level": "raid1", 00:16:58.931 "superblock": true, 00:16:58.931 "num_base_bdevs": 2, 00:16:58.931 "num_base_bdevs_discovered": 1, 00:16:58.931 "num_base_bdevs_operational": 2, 00:16:58.931 "base_bdevs_list": [ 00:16:58.931 { 00:16:58.931 "name": "BaseBdev1", 00:16:58.931 "uuid": "cc86c545-8301-42f1-bb94-9014f787d504", 00:16:58.931 "is_configured": true, 00:16:58.931 "data_offset": 256, 00:16:58.931 "data_size": 7936 00:16:58.931 }, 00:16:58.931 { 00:16:58.931 "name": "BaseBdev2", 00:16:58.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.931 "is_configured": false, 00:16:58.931 "data_offset": 0, 00:16:58.931 "data_size": 0 00:16:58.931 } 00:16:58.931 ] 00:16:58.931 }' 00:16:58.931 02:31:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.931 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.190 [2024-10-13 02:31:17.857840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.190 [2024-10-13 02:31:17.858104] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:59.190 [2024-10-13 02:31:17.858121] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:59.190 [2024-10-13 02:31:17.858264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:59.190 [2024-10-13 02:31:17.858410] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:59.190 [2024-10-13 02:31:17.858438] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:59.190 [2024-10-13 02:31:17.858577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.190 BaseBdev2 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.190 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.451 [ 00:16:59.451 { 00:16:59.451 "name": "BaseBdev2", 00:16:59.451 "aliases": [ 00:16:59.451 "4dcb1ca1-bc7d-4e8c-9010-b9b61ce7c91b" 00:16:59.451 ], 00:16:59.451 "product_name": "Malloc disk", 00:16:59.451 "block_size": 4096, 00:16:59.451 "num_blocks": 8192, 00:16:59.451 "uuid": "4dcb1ca1-bc7d-4e8c-9010-b9b61ce7c91b", 00:16:59.451 "md_size": 32, 00:16:59.451 "md_interleave": false, 00:16:59.451 "dif_type": 0, 00:16:59.451 "assigned_rate_limits": { 00:16:59.451 "rw_ios_per_sec": 0, 00:16:59.451 "rw_mbytes_per_sec": 0, 00:16:59.451 "r_mbytes_per_sec": 0, 00:16:59.451 "w_mbytes_per_sec": 0 00:16:59.451 }, 00:16:59.451 "claimed": true, 00:16:59.451 "claim_type": 
"exclusive_write", 00:16:59.451 "zoned": false, 00:16:59.451 "supported_io_types": { 00:16:59.451 "read": true, 00:16:59.451 "write": true, 00:16:59.451 "unmap": true, 00:16:59.451 "flush": true, 00:16:59.451 "reset": true, 00:16:59.451 "nvme_admin": false, 00:16:59.451 "nvme_io": false, 00:16:59.451 "nvme_io_md": false, 00:16:59.451 "write_zeroes": true, 00:16:59.451 "zcopy": true, 00:16:59.451 "get_zone_info": false, 00:16:59.451 "zone_management": false, 00:16:59.451 "zone_append": false, 00:16:59.451 "compare": false, 00:16:59.451 "compare_and_write": false, 00:16:59.451 "abort": true, 00:16:59.451 "seek_hole": false, 00:16:59.451 "seek_data": false, 00:16:59.451 "copy": true, 00:16:59.451 "nvme_iov_md": false 00:16:59.452 }, 00:16:59.452 "memory_domains": [ 00:16:59.452 { 00:16:59.452 "dma_device_id": "system", 00:16:59.452 "dma_device_type": 1 00:16:59.452 }, 00:16:59.452 { 00:16:59.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.452 "dma_device_type": 2 00:16:59.452 } 00:16:59.452 ], 00:16:59.452 "driver_specific": {} 00:16:59.452 } 00:16:59.452 ] 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.452 
02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.452 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.453 "name": "Existed_Raid", 00:16:59.453 "uuid": "bd07c254-bdc5-4cc5-a989-aebf2eb7e926", 00:16:59.453 "strip_size_kb": 0, 00:16:59.453 "state": "online", 00:16:59.453 "raid_level": "raid1", 00:16:59.453 "superblock": true, 00:16:59.453 "num_base_bdevs": 2, 00:16:59.453 "num_base_bdevs_discovered": 2, 00:16:59.453 "num_base_bdevs_operational": 2, 00:16:59.453 
"base_bdevs_list": [ 00:16:59.453 { 00:16:59.453 "name": "BaseBdev1", 00:16:59.453 "uuid": "cc86c545-8301-42f1-bb94-9014f787d504", 00:16:59.453 "is_configured": true, 00:16:59.453 "data_offset": 256, 00:16:59.453 "data_size": 7936 00:16:59.453 }, 00:16:59.453 { 00:16:59.453 "name": "BaseBdev2", 00:16:59.453 "uuid": "4dcb1ca1-bc7d-4e8c-9010-b9b61ce7c91b", 00:16:59.453 "is_configured": true, 00:16:59.453 "data_offset": 256, 00:16:59.453 "data_size": 7936 00:16:59.453 } 00:16:59.453 ] 00:16:59.453 }' 00:16:59.453 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.453 02:31:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:16:59.713 [2024-10-13 02:31:18.365360] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.713 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.972 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:59.972 "name": "Existed_Raid", 00:16:59.972 "aliases": [ 00:16:59.972 "bd07c254-bdc5-4cc5-a989-aebf2eb7e926" 00:16:59.972 ], 00:16:59.972 "product_name": "Raid Volume", 00:16:59.972 "block_size": 4096, 00:16:59.972 "num_blocks": 7936, 00:16:59.972 "uuid": "bd07c254-bdc5-4cc5-a989-aebf2eb7e926", 00:16:59.972 "md_size": 32, 00:16:59.972 "md_interleave": false, 00:16:59.972 "dif_type": 0, 00:16:59.972 "assigned_rate_limits": { 00:16:59.972 "rw_ios_per_sec": 0, 00:16:59.972 "rw_mbytes_per_sec": 0, 00:16:59.972 "r_mbytes_per_sec": 0, 00:16:59.972 "w_mbytes_per_sec": 0 00:16:59.972 }, 00:16:59.972 "claimed": false, 00:16:59.972 "zoned": false, 00:16:59.972 "supported_io_types": { 00:16:59.972 "read": true, 00:16:59.972 "write": true, 00:16:59.972 "unmap": false, 00:16:59.972 "flush": false, 00:16:59.972 "reset": true, 00:16:59.972 "nvme_admin": false, 00:16:59.972 "nvme_io": false, 00:16:59.972 "nvme_io_md": false, 00:16:59.972 "write_zeroes": true, 00:16:59.972 "zcopy": false, 00:16:59.972 "get_zone_info": false, 00:16:59.972 "zone_management": false, 00:16:59.972 "zone_append": false, 00:16:59.972 "compare": false, 00:16:59.972 "compare_and_write": false, 00:16:59.972 "abort": false, 00:16:59.972 "seek_hole": false, 00:16:59.972 "seek_data": false, 00:16:59.972 "copy": false, 00:16:59.972 "nvme_iov_md": false 00:16:59.972 }, 00:16:59.972 "memory_domains": [ 00:16:59.972 { 00:16:59.972 "dma_device_id": "system", 00:16:59.972 "dma_device_type": 1 00:16:59.972 }, 00:16:59.972 { 00:16:59.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.972 "dma_device_type": 2 00:16:59.972 }, 00:16:59.972 { 
00:16:59.972 "dma_device_id": "system", 00:16:59.972 "dma_device_type": 1 00:16:59.972 }, 00:16:59.972 { 00:16:59.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.972 "dma_device_type": 2 00:16:59.972 } 00:16:59.972 ], 00:16:59.972 "driver_specific": { 00:16:59.972 "raid": { 00:16:59.972 "uuid": "bd07c254-bdc5-4cc5-a989-aebf2eb7e926", 00:16:59.972 "strip_size_kb": 0, 00:16:59.972 "state": "online", 00:16:59.972 "raid_level": "raid1", 00:16:59.972 "superblock": true, 00:16:59.972 "num_base_bdevs": 2, 00:16:59.972 "num_base_bdevs_discovered": 2, 00:16:59.972 "num_base_bdevs_operational": 2, 00:16:59.972 "base_bdevs_list": [ 00:16:59.972 { 00:16:59.972 "name": "BaseBdev1", 00:16:59.972 "uuid": "cc86c545-8301-42f1-bb94-9014f787d504", 00:16:59.972 "is_configured": true, 00:16:59.972 "data_offset": 256, 00:16:59.972 "data_size": 7936 00:16:59.972 }, 00:16:59.972 { 00:16:59.972 "name": "BaseBdev2", 00:16:59.972 "uuid": "4dcb1ca1-bc7d-4e8c-9010-b9b61ce7c91b", 00:16:59.972 "is_configured": true, 00:16:59.972 "data_offset": 256, 00:16:59.972 "data_size": 7936 00:16:59.972 } 00:16:59.972 ] 00:16:59.972 } 00:16:59.972 } 00:16:59.972 }' 00:16:59.972 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.972 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:59.972 BaseBdev2' 00:16:59.972 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.972 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:59.972 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.972 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.973 [2024-10-13 02:31:18.580843] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.973 "name": "Existed_Raid", 00:16:59.973 "uuid": "bd07c254-bdc5-4cc5-a989-aebf2eb7e926", 00:16:59.973 "strip_size_kb": 0, 00:16:59.973 "state": "online", 00:16:59.973 "raid_level": "raid1", 00:16:59.973 "superblock": true, 00:16:59.973 "num_base_bdevs": 2, 00:16:59.973 "num_base_bdevs_discovered": 1, 00:16:59.973 "num_base_bdevs_operational": 1, 00:16:59.973 "base_bdevs_list": [ 00:16:59.973 { 00:16:59.973 "name": null, 00:16:59.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.973 "is_configured": false, 00:16:59.973 "data_offset": 0, 00:16:59.973 "data_size": 7936 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "name": "BaseBdev2", 00:16:59.973 "uuid": 
"4dcb1ca1-bc7d-4e8c-9010-b9b61ce7c91b", 00:16:59.973 "is_configured": true, 00:16:59.973 "data_offset": 256, 00:16:59.973 "data_size": 7936 00:16:59.973 } 00:16:59.973 ] 00:16:59.973 }' 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.973 02:31:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.541 [2024-10-13 02:31:19.096154] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:00.541 [2024-10-13 02:31:19.096275] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.541 [2024-10-13 02:31:19.108791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.541 [2024-10-13 02:31:19.108845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.541 [2024-10-13 02:31:19.108874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:00.541 02:31:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97486 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97486 ']' 00:17:00.541 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97486 00:17:00.542 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:00.542 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.542 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97486 00:17:00.542 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.542 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.542 killing process with pid 97486 00:17:00.542 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97486' 00:17:00.542 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97486 00:17:00.542 [2024-10-13 02:31:19.202857] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.542 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97486 00:17:00.542 [2024-10-13 02:31:19.203944] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.800 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:00.800 00:17:00.801 real 0m4.055s 00:17:00.801 user 0m6.369s 00:17:00.801 sys 0m0.890s 00:17:00.801 02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.801 
02:31:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.801 ************************************ 00:17:00.801 END TEST raid_state_function_test_sb_md_separate 00:17:00.801 ************************************ 00:17:01.059 02:31:19 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:01.059 02:31:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:01.059 02:31:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.059 02:31:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.059 ************************************ 00:17:01.059 START TEST raid_superblock_test_md_separate 00:17:01.059 ************************************ 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97723 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97723 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97723 ']' 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.059 02:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.060 02:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:01.060 02:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.060 02:31:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.060 [2024-10-13 02:31:19.614999] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:01.060 [2024-10-13 02:31:19.615490] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97723 ] 00:17:01.319 [2024-10-13 02:31:19.760234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.319 [2024-10-13 02:31:19.814367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.319 [2024-10-13 02:31:19.856967] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.319 [2024-10-13 02:31:19.857010] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:01.886 02:31:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.886 malloc1 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.886 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.886 [2024-10-13 02:31:20.488367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.886 [2024-10-13 02:31:20.488435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.886 [2024-10-13 02:31:20.488460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:01.886 [2024-10-13 02:31:20.488472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.886 [2024-10-13 02:31:20.490485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.886 [2024-10-13 02:31:20.490529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:01.886 pt1 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.887 malloc2 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.887 02:31:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.887 [2024-10-13 02:31:20.532820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.887 [2024-10-13 02:31:20.532961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.887 [2024-10-13 02:31:20.533007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.887 [2024-10-13 02:31:20.533052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.887 [2024-10-13 02:31:20.535063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.887 [2024-10-13 02:31:20.535139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.887 pt2 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.887 [2024-10-13 02:31:20.544828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.887 [2024-10-13 02:31:20.546768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.887 [2024-10-13 02:31:20.547029] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:17:01.887 [2024-10-13 02:31:20.547082] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:01.887 [2024-10-13 02:31:20.547196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:17:01.887 [2024-10-13 02:31:20.547343] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:17:01.887 [2024-10-13 02:31:20.547388] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:17:01.887 [2024-10-13 02:31:20.547522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.887 02:31:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.887 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.146 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.146 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.146 "name": "raid_bdev1", 00:17:02.146 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:02.146 "strip_size_kb": 0, 00:17:02.146 "state": "online", 00:17:02.146 "raid_level": "raid1", 00:17:02.146 "superblock": true, 00:17:02.146 "num_base_bdevs": 2, 00:17:02.146 "num_base_bdevs_discovered": 2, 00:17:02.146 "num_base_bdevs_operational": 2, 00:17:02.146 "base_bdevs_list": [ 00:17:02.146 { 00:17:02.146 "name": "pt1", 00:17:02.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.146 "is_configured": true, 00:17:02.146 "data_offset": 256, 00:17:02.146 "data_size": 7936 00:17:02.146 }, 00:17:02.146 { 00:17:02.146 "name": "pt2", 00:17:02.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.146 "is_configured": true, 00:17:02.146 "data_offset": 256, 00:17:02.146 "data_size": 7936 00:17:02.146 } 00:17:02.146 ] 00:17:02.146 }' 00:17:02.146 02:31:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.146 02:31:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.405 [2024-10-13 02:31:21.036326] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.405 "name": "raid_bdev1", 00:17:02.405 "aliases": [ 00:17:02.405 "438f3f58-9f6c-4d1d-956d-246384cbe44e" 00:17:02.405 ], 00:17:02.405 "product_name": "Raid Volume", 00:17:02.405 "block_size": 4096, 00:17:02.405 "num_blocks": 7936, 00:17:02.405 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:02.405 "md_size": 32, 00:17:02.405 "md_interleave": false, 00:17:02.405 "dif_type": 0, 00:17:02.405 "assigned_rate_limits": { 00:17:02.405 "rw_ios_per_sec": 0, 00:17:02.405 "rw_mbytes_per_sec": 0, 00:17:02.405 "r_mbytes_per_sec": 0, 00:17:02.405 "w_mbytes_per_sec": 0 00:17:02.405 }, 00:17:02.405 "claimed": false, 00:17:02.405 "zoned": false, 
00:17:02.405 "supported_io_types": { 00:17:02.405 "read": true, 00:17:02.405 "write": true, 00:17:02.405 "unmap": false, 00:17:02.405 "flush": false, 00:17:02.405 "reset": true, 00:17:02.405 "nvme_admin": false, 00:17:02.405 "nvme_io": false, 00:17:02.405 "nvme_io_md": false, 00:17:02.405 "write_zeroes": true, 00:17:02.405 "zcopy": false, 00:17:02.405 "get_zone_info": false, 00:17:02.405 "zone_management": false, 00:17:02.405 "zone_append": false, 00:17:02.405 "compare": false, 00:17:02.405 "compare_and_write": false, 00:17:02.405 "abort": false, 00:17:02.405 "seek_hole": false, 00:17:02.405 "seek_data": false, 00:17:02.405 "copy": false, 00:17:02.405 "nvme_iov_md": false 00:17:02.405 }, 00:17:02.405 "memory_domains": [ 00:17:02.405 { 00:17:02.405 "dma_device_id": "system", 00:17:02.405 "dma_device_type": 1 00:17:02.405 }, 00:17:02.405 { 00:17:02.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.405 "dma_device_type": 2 00:17:02.405 }, 00:17:02.405 { 00:17:02.405 "dma_device_id": "system", 00:17:02.405 "dma_device_type": 1 00:17:02.405 }, 00:17:02.405 { 00:17:02.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.405 "dma_device_type": 2 00:17:02.405 } 00:17:02.405 ], 00:17:02.405 "driver_specific": { 00:17:02.405 "raid": { 00:17:02.405 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:02.405 "strip_size_kb": 0, 00:17:02.405 "state": "online", 00:17:02.405 "raid_level": "raid1", 00:17:02.405 "superblock": true, 00:17:02.405 "num_base_bdevs": 2, 00:17:02.405 "num_base_bdevs_discovered": 2, 00:17:02.405 "num_base_bdevs_operational": 2, 00:17:02.405 "base_bdevs_list": [ 00:17:02.405 { 00:17:02.405 "name": "pt1", 00:17:02.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.405 "is_configured": true, 00:17:02.405 "data_offset": 256, 00:17:02.405 "data_size": 7936 00:17:02.405 }, 00:17:02.405 { 00:17:02.405 "name": "pt2", 00:17:02.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.405 "is_configured": true, 00:17:02.405 "data_offset": 256, 
00:17:02.405 "data_size": 7936 00:17:02.405 } 00:17:02.405 ] 00:17:02.405 } 00:17:02.405 } 00:17:02.405 }' 00:17:02.405 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:02.664 pt2' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.664 [2024-10-13 02:31:21.267863] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=438f3f58-9f6c-4d1d-956d-246384cbe44e 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 438f3f58-9f6c-4d1d-956d-246384cbe44e ']' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.664 [2024-10-13 02:31:21.311510] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.664 [2024-10-13 02:31:21.311543] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.664 [2024-10-13 02:31:21.311636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.664 [2024-10-13 02:31:21.311709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.664 [2024-10-13 02:31:21.311720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.664 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:17:02.924 02:31:21 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.924 [2024-10-13 02:31:21.435318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:02.924 [2024-10-13 02:31:21.437275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:02.924 [2024-10-13 02:31:21.437339] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:02.924 [2024-10-13 02:31:21.437395] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:02.924 [2024-10-13 02:31:21.437413] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.924 [2024-10-13 02:31:21.437425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:17:02.924 request: 00:17:02.924 { 00:17:02.924 "name": 
"raid_bdev1", 00:17:02.924 "raid_level": "raid1", 00:17:02.924 "base_bdevs": [ 00:17:02.924 "malloc1", 00:17:02.924 "malloc2" 00:17:02.924 ], 00:17:02.924 "superblock": false, 00:17:02.924 "method": "bdev_raid_create", 00:17:02.924 "req_id": 1 00:17:02.924 } 00:17:02.924 Got JSON-RPC error response 00:17:02.924 response: 00:17:02.924 { 00:17:02.924 "code": -17, 00:17:02.924 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:02.924 } 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.924 [2024-10-13 02:31:21.495185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.924 [2024-10-13 02:31:21.495335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.924 [2024-10-13 02:31:21.495369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:02.924 [2024-10-13 02:31:21.495380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.924 [2024-10-13 02:31:21.497709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.924 [2024-10-13 02:31:21.497815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.924 [2024-10-13 02:31:21.497908] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:02.924 [2024-10-13 02:31:21.497963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.924 pt1 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.924 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.925 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.925 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.925 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.925 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.925 "name": "raid_bdev1", 00:17:02.925 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:02.925 "strip_size_kb": 0, 00:17:02.925 "state": "configuring", 00:17:02.925 "raid_level": "raid1", 00:17:02.925 "superblock": true, 00:17:02.925 "num_base_bdevs": 2, 00:17:02.925 "num_base_bdevs_discovered": 1, 00:17:02.925 "num_base_bdevs_operational": 2, 00:17:02.925 "base_bdevs_list": [ 00:17:02.925 { 00:17:02.925 "name": "pt1", 00:17:02.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.925 "is_configured": true, 00:17:02.925 "data_offset": 256, 00:17:02.925 "data_size": 7936 00:17:02.925 }, 00:17:02.925 { 00:17:02.925 "name": null, 00:17:02.925 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.925 "is_configured": false, 00:17:02.925 "data_offset": 256, 00:17:02.925 "data_size": 7936 00:17:02.925 } 00:17:02.925 ] 00:17:02.925 }' 00:17:02.925 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.925 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.492 [2024-10-13 02:31:21.954419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.492 [2024-10-13 02:31:21.954574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.492 [2024-10-13 02:31:21.954618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:03.492 [2024-10-13 02:31:21.954648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.492 [2024-10-13 02:31:21.954939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.492 [2024-10-13 02:31:21.954998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.492 [2024-10-13 02:31:21.955092] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:03.492 [2024-10-13 02:31:21.955157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.492 [2024-10-13 02:31:21.955295] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:17:03.492 [2024-10-13 02:31:21.955336] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.492 [2024-10-13 02:31:21.955449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:03.492 [2024-10-13 02:31:21.955579] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:17:03.492 [2024-10-13 02:31:21.955625] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:17:03.492 [2024-10-13 02:31:21.955749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.492 pt2 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.492 02:31:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.492 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.492 "name": "raid_bdev1", 00:17:03.492 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:03.492 "strip_size_kb": 0, 00:17:03.492 "state": "online", 00:17:03.492 "raid_level": "raid1", 00:17:03.492 "superblock": true, 00:17:03.492 "num_base_bdevs": 2, 00:17:03.492 "num_base_bdevs_discovered": 2, 00:17:03.492 "num_base_bdevs_operational": 2, 00:17:03.492 "base_bdevs_list": [ 00:17:03.492 { 00:17:03.492 "name": "pt1", 00:17:03.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.492 "is_configured": true, 00:17:03.492 "data_offset": 256, 00:17:03.492 "data_size": 7936 00:17:03.492 }, 00:17:03.492 { 00:17:03.492 "name": "pt2", 00:17:03.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.492 "is_configured": true, 00:17:03.493 "data_offset": 256, 
00:17:03.493 "data_size": 7936 00:17:03.493 } 00:17:03.493 ] 00:17:03.493 }' 00:17:03.493 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.493 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:04.060 [2024-10-13 02:31:22.445903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:04.060 "name": "raid_bdev1", 00:17:04.060 "aliases": [ 00:17:04.060 "438f3f58-9f6c-4d1d-956d-246384cbe44e" 00:17:04.060 ], 00:17:04.060 "product_name": 
"Raid Volume", 00:17:04.060 "block_size": 4096, 00:17:04.060 "num_blocks": 7936, 00:17:04.060 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:04.060 "md_size": 32, 00:17:04.060 "md_interleave": false, 00:17:04.060 "dif_type": 0, 00:17:04.060 "assigned_rate_limits": { 00:17:04.060 "rw_ios_per_sec": 0, 00:17:04.060 "rw_mbytes_per_sec": 0, 00:17:04.060 "r_mbytes_per_sec": 0, 00:17:04.060 "w_mbytes_per_sec": 0 00:17:04.060 }, 00:17:04.060 "claimed": false, 00:17:04.060 "zoned": false, 00:17:04.060 "supported_io_types": { 00:17:04.060 "read": true, 00:17:04.060 "write": true, 00:17:04.060 "unmap": false, 00:17:04.060 "flush": false, 00:17:04.060 "reset": true, 00:17:04.060 "nvme_admin": false, 00:17:04.060 "nvme_io": false, 00:17:04.060 "nvme_io_md": false, 00:17:04.060 "write_zeroes": true, 00:17:04.060 "zcopy": false, 00:17:04.060 "get_zone_info": false, 00:17:04.060 "zone_management": false, 00:17:04.060 "zone_append": false, 00:17:04.060 "compare": false, 00:17:04.060 "compare_and_write": false, 00:17:04.060 "abort": false, 00:17:04.060 "seek_hole": false, 00:17:04.060 "seek_data": false, 00:17:04.060 "copy": false, 00:17:04.060 "nvme_iov_md": false 00:17:04.060 }, 00:17:04.060 "memory_domains": [ 00:17:04.060 { 00:17:04.060 "dma_device_id": "system", 00:17:04.060 "dma_device_type": 1 00:17:04.060 }, 00:17:04.060 { 00:17:04.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.060 "dma_device_type": 2 00:17:04.060 }, 00:17:04.060 { 00:17:04.060 "dma_device_id": "system", 00:17:04.060 "dma_device_type": 1 00:17:04.060 }, 00:17:04.060 { 00:17:04.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.060 "dma_device_type": 2 00:17:04.060 } 00:17:04.060 ], 00:17:04.060 "driver_specific": { 00:17:04.060 "raid": { 00:17:04.060 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:04.060 "strip_size_kb": 0, 00:17:04.060 "state": "online", 00:17:04.060 "raid_level": "raid1", 00:17:04.060 "superblock": true, 00:17:04.060 "num_base_bdevs": 2, 00:17:04.060 
"num_base_bdevs_discovered": 2, 00:17:04.060 "num_base_bdevs_operational": 2, 00:17:04.060 "base_bdevs_list": [ 00:17:04.060 { 00:17:04.060 "name": "pt1", 00:17:04.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.060 "is_configured": true, 00:17:04.060 "data_offset": 256, 00:17:04.060 "data_size": 7936 00:17:04.060 }, 00:17:04.060 { 00:17:04.060 "name": "pt2", 00:17:04.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.060 "is_configured": true, 00:17:04.060 "data_offset": 256, 00:17:04.060 "data_size": 7936 00:17:04.060 } 00:17:04.060 ] 00:17:04.060 } 00:17:04.060 } 00:17:04.060 }' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:04.060 pt2' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.060 
02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.060 [2024-10-13 02:31:22.653509] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 438f3f58-9f6c-4d1d-956d-246384cbe44e '!=' 438f3f58-9f6c-4d1d-956d-246384cbe44e ']' 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.060 [2024-10-13 02:31:22.697218] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.060 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.061 02:31:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.061 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.320 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.320 "name": "raid_bdev1", 00:17:04.320 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:04.320 "strip_size_kb": 0, 00:17:04.320 "state": "online", 00:17:04.320 "raid_level": "raid1", 00:17:04.320 "superblock": true, 00:17:04.320 "num_base_bdevs": 2, 00:17:04.320 "num_base_bdevs_discovered": 1, 00:17:04.320 "num_base_bdevs_operational": 1, 00:17:04.320 "base_bdevs_list": [ 00:17:04.320 { 00:17:04.320 "name": null, 00:17:04.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.320 "is_configured": false, 00:17:04.320 "data_offset": 0, 00:17:04.320 "data_size": 7936 00:17:04.320 }, 00:17:04.320 { 00:17:04.320 "name": "pt2", 00:17:04.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.320 "is_configured": true, 00:17:04.320 "data_offset": 256, 00:17:04.320 "data_size": 7936 00:17:04.320 } 00:17:04.320 ] 00:17:04.320 }' 00:17:04.320 02:31:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:04.320 02:31:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.578 [2024-10-13 02:31:23.160350] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.578 [2024-10-13 02:31:23.160382] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.578 [2024-10-13 02:31:23.160471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.578 [2024-10-13 02:31:23.160526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.578 [2024-10-13 02:31:23.160535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:04.578 02:31:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.578 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.579 [2024-10-13 02:31:23.220227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.579 [2024-10-13 02:31:23.220296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.579 
[2024-10-13 02:31:23.220337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:17:04.579 [2024-10-13 02:31:23.220350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.579 [2024-10-13 02:31:23.222354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.579 [2024-10-13 02:31:23.222389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.579 [2024-10-13 02:31:23.222466] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:04.579 [2024-10-13 02:31:23.222513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.579 [2024-10-13 02:31:23.222596] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:17:04.579 [2024-10-13 02:31:23.222607] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:04.579 [2024-10-13 02:31:23.222681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:17:04.579 [2024-10-13 02:31:23.222759] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:17:04.579 [2024-10-13 02:31:23.222772] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:17:04.579 [2024-10-13 02:31:23.222844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.579 pt2 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.579 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.838 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.838 "name": "raid_bdev1", 00:17:04.838 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:04.838 "strip_size_kb": 0, 00:17:04.838 "state": "online", 00:17:04.838 "raid_level": "raid1", 00:17:04.838 "superblock": true, 00:17:04.838 "num_base_bdevs": 2, 00:17:04.838 "num_base_bdevs_discovered": 1, 00:17:04.838 "num_base_bdevs_operational": 1, 00:17:04.838 "base_bdevs_list": [ 00:17:04.838 { 00:17:04.838 
"name": null, 00:17:04.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.838 "is_configured": false, 00:17:04.838 "data_offset": 256, 00:17:04.838 "data_size": 7936 00:17:04.838 }, 00:17:04.838 { 00:17:04.838 "name": "pt2", 00:17:04.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.838 "is_configured": true, 00:17:04.838 "data_offset": 256, 00:17:04.838 "data_size": 7936 00:17:04.838 } 00:17:04.838 ] 00:17:04.838 }' 00:17:04.838 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.838 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.097 [2024-10-13 02:31:23.687461] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.097 [2024-10-13 02:31:23.687497] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.097 [2024-10-13 02:31:23.687595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.097 [2024-10-13 02:31:23.687646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.097 [2024-10-13 02:31:23.687670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.097 02:31:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.097 [2024-10-13 02:31:23.751367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.097 [2024-10-13 02:31:23.751442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.097 [2024-10-13 02:31:23.751464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:05.097 [2024-10-13 02:31:23.751478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.097 [2024-10-13 02:31:23.753542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.097 [2024-10-13 02:31:23.753585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.097 [2024-10-13 02:31:23.753641] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:17:05.097 [2024-10-13 02:31:23.753691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.097 [2024-10-13 02:31:23.753811] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:05.097 [2024-10-13 02:31:23.753832] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.097 [2024-10-13 02:31:23.753851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:17:05.097 [2024-10-13 02:31:23.753903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.097 [2024-10-13 02:31:23.753970] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:17:05.097 [2024-10-13 02:31:23.753981] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:05.097 [2024-10-13 02:31:23.754043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:17:05.097 [2024-10-13 02:31:23.754126] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:17:05.097 [2024-10-13 02:31:23.754144] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:17:05.097 [2024-10-13 02:31:23.754226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.097 pt1 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.097 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.364 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.364 "name": "raid_bdev1", 00:17:05.364 "uuid": "438f3f58-9f6c-4d1d-956d-246384cbe44e", 00:17:05.364 "strip_size_kb": 0, 00:17:05.364 "state": "online", 00:17:05.364 "raid_level": "raid1", 00:17:05.364 "superblock": true, 00:17:05.364 "num_base_bdevs": 2, 00:17:05.364 "num_base_bdevs_discovered": 1, 00:17:05.364 
"num_base_bdevs_operational": 1, 00:17:05.364 "base_bdevs_list": [ 00:17:05.364 { 00:17:05.364 "name": null, 00:17:05.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.364 "is_configured": false, 00:17:05.364 "data_offset": 256, 00:17:05.364 "data_size": 7936 00:17:05.364 }, 00:17:05.364 { 00:17:05.364 "name": "pt2", 00:17:05.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.364 "is_configured": true, 00:17:05.364 "data_offset": 256, 00:17:05.364 "data_size": 7936 00:17:05.364 } 00:17:05.364 ] 00:17:05.364 }' 00:17:05.364 02:31:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.364 02:31:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.631 [2024-10-13 
02:31:24.202903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.631 02:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 438f3f58-9f6c-4d1d-956d-246384cbe44e '!=' 438f3f58-9f6c-4d1d-956d-246384cbe44e ']' 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97723 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97723 ']' 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97723 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97723 00:17:05.632 killing process with pid 97723 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97723' 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97723 00:17:05.632 [2024-10-13 02:31:24.287589] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.632 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97723 00:17:05.632 [2024-10-13 02:31:24.287706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:17:05.632 [2024-10-13 02:31:24.287762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.632 [2024-10-13 02:31:24.287771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:17:05.632 [2024-10-13 02:31:24.312450] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.890 02:31:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:05.890 00:17:05.890 real 0m5.026s 00:17:05.890 user 0m8.185s 00:17:05.890 sys 0m1.106s 00:17:05.890 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.890 02:31:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.890 ************************************ 00:17:05.890 END TEST raid_superblock_test_md_separate 00:17:05.890 ************************************ 00:17:06.148 02:31:24 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:06.148 02:31:24 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:06.148 02:31:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:06.148 02:31:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.148 02:31:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.148 ************************************ 00:17:06.148 START TEST raid_rebuild_test_sb_md_separate 00:17:06.148 ************************************ 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:06.148 
02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98041 00:17:06.148 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98041 00:17:06.149 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:06.149 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98041 ']' 00:17:06.149 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.149 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.149 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.149 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.149 02:31:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.149 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:06.149 Zero copy mechanism will not be used. 00:17:06.149 [2024-10-13 02:31:24.725981] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:06.149 [2024-10-13 02:31:24.726095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98041 ] 00:17:06.407 [2024-10-13 02:31:24.869717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.407 [2024-10-13 02:31:24.920936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.407 [2024-10-13 02:31:24.963100] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.407 [2024-10-13 02:31:24.963142] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.974 BaseBdev1_malloc 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.974 02:31:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.974 [2024-10-13 02:31:25.590027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:06.974 [2024-10-13 02:31:25.590092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.974 [2024-10-13 02:31:25.590119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:06.974 [2024-10-13 02:31:25.590135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.974 [2024-10-13 02:31:25.592155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.974 [2024-10-13 02:31:25.592197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.974 BaseBdev1 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.974 BaseBdev2_malloc 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:06.974 [2024-10-13 02:31:25.627157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:17:06.974 [2024-10-13 02:31:25.627233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:06.974 [2024-10-13 02:31:25.627260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:06.974 [2024-10-13 02:31:25.627269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:06.974 [2024-10-13 02:31:25.629302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:06.974 [2024-10-13 02:31:25.629345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:17:06.974 BaseBdev2
00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc
00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:06.974 spare_malloc
00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.974 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:07.233 spare_delay
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:07.233 [2024-10-13 02:31:25.668393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:07.233 [2024-10-13 02:31:25.668463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:07.233 [2024-10-13 02:31:25.668490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:17:07.233 [2024-10-13 02:31:25.668500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:07.233 [2024-10-13 02:31:25.670543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:07.233 [2024-10-13 02:31:25.670589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:07.233 spare
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:07.233 [2024-10-13 02:31:25.680474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:07.233 [2024-10-13 02:31:25.682423] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:07.233 [2024-10-13 02:31:25.682605] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:17:07.233 [2024-10-13 02:31:25.682617] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:07.233 [2024-10-13 02:31:25.682721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:17:07.233 [2024-10-13 02:31:25.682850] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:17:07.233 [2024-10-13 02:31:25.682880] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:17:07.233 [2024-10-13 02:31:25.682997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:07.233   "name": "raid_bdev1",
00:17:07.233   "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec",
00:17:07.233   "strip_size_kb": 0,
00:17:07.233   "state": "online",
00:17:07.233   "raid_level": "raid1",
00:17:07.233   "superblock": true,
00:17:07.233   "num_base_bdevs": 2,
00:17:07.233   "num_base_bdevs_discovered": 2,
00:17:07.233   "num_base_bdevs_operational": 2,
00:17:07.233   "base_bdevs_list": [
00:17:07.233     {
00:17:07.233       "name": "BaseBdev1",
00:17:07.233       "uuid": "7831c343-54ef-5f27-a420-22261d641466",
00:17:07.233       "is_configured": true,
00:17:07.233       "data_offset": 256,
00:17:07.233       "data_size": 7936
00:17:07.233     },
00:17:07.233     {
00:17:07.233       "name": "BaseBdev2",
00:17:07.233       "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700",
00:17:07.233       "is_configured": true,
00:17:07.233       "data_offset": 256,
00:17:07.233       "data_size": 7936
00:17:07.233     }
00:17:07.233   ]
00:17:07.233 }'
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:07.233 02:31:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:07.491 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:17:07.491 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:07.491 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.491 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:07.491 [2024-10-13 02:31:26.088143] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:07.492 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:17:07.750 [2024-10-13 02:31:26.347466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:17:07.750 /dev/nbd0
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:07.750 1+0 records in
00:17:07.750 1+0 records out
00:17:07.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370901 s, 11.0 MB/s
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:17:07.750 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:17:08.317 7936+0 records in
00:17:08.317 7936+0 records out
00:17:08.317 32505856 bytes (33 MB, 31 MiB) copied, 0.563451 s, 57.7 MB/s
00:17:08.317 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:17:08.317 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:08.317 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:08.317 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:08.317 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:17:08.317 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:08.317 02:31:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:08.575 [2024-10-13 02:31:27.218697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:08.575 [2024-10-13 02:31:27.234788] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.575 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:08.576 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:08.834 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.834 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:08.834   "name": "raid_bdev1",
00:17:08.834   "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec",
00:17:08.834   "strip_size_kb": 0,
00:17:08.834   "state": "online",
00:17:08.834   "raid_level": "raid1",
00:17:08.834   "superblock": true,
00:17:08.834   "num_base_bdevs": 2,
00:17:08.834   "num_base_bdevs_discovered": 1,
00:17:08.834   "num_base_bdevs_operational": 1,
00:17:08.834   "base_bdevs_list": [
00:17:08.834     {
00:17:08.834       "name": null,
00:17:08.834       "uuid": "00000000-0000-0000-0000-000000000000",
00:17:08.834       "is_configured": false,
00:17:08.834       "data_offset": 0,
00:17:08.834       "data_size": 7936
00:17:08.834     },
00:17:08.834     {
00:17:08.834       "name": "BaseBdev2",
00:17:08.834       "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700",
00:17:08.834       "is_configured": true,
00:17:08.834       "data_offset": 256,
00:17:08.834       "data_size": 7936
00:17:08.834     }
00:17:08.834   ]
00:17:08.834 }'
00:17:08.834 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:08.834 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:09.093 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:09.093 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.093 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:09.093 [2024-10-13 02:31:27.713997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:09.093 [2024-10-13 02:31:27.715834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960
00:17:09.093 [2024-10-13 02:31:27.717761] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:09.093 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.093 02:31:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:10.470   "name": "raid_bdev1",
00:17:10.470   "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec",
00:17:10.470   "strip_size_kb": 0,
00:17:10.470   "state": "online",
00:17:10.470   "raid_level": "raid1",
00:17:10.470   "superblock": true,
00:17:10.470   "num_base_bdevs": 2,
00:17:10.470   "num_base_bdevs_discovered": 2,
00:17:10.470   "num_base_bdevs_operational": 2,
00:17:10.470   "process": {
00:17:10.470     "type": "rebuild",
00:17:10.470     "target": "spare",
00:17:10.470     "progress": {
00:17:10.470       "blocks": 2560,
00:17:10.470       "percent": 32
00:17:10.470     }
00:17:10.470   },
00:17:10.470   "base_bdevs_list": [
00:17:10.470     {
00:17:10.470       "name": "spare",
00:17:10.470       "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab",
00:17:10.470       "is_configured": true,
00:17:10.470       "data_offset": 256,
00:17:10.470       "data_size": 7936
00:17:10.470     },
00:17:10.470     {
00:17:10.470       "name": "BaseBdev2",
00:17:10.470       "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700",
00:17:10.470       "is_configured": true,
00:17:10.470       "data_offset": 256,
00:17:10.470       "data_size": 7936
00:17:10.470     }
00:17:10.470   ]
00:17:10.470 }'
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:10.470 [2024-10-13 02:31:28.884645] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:10.470 [2024-10-13 02:31:28.923704] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:10.470 [2024-10-13 02:31:28.923788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:10.470 [2024-10-13 02:31:28.923811] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:10.470 [2024-10-13 02:31:28.923818] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.470 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:10.471   "name": "raid_bdev1",
00:17:10.471   "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec",
00:17:10.471   "strip_size_kb": 0,
00:17:10.471   "state": "online",
00:17:10.471   "raid_level": "raid1",
00:17:10.471   "superblock": true,
00:17:10.471   "num_base_bdevs": 2,
00:17:10.471   "num_base_bdevs_discovered": 1,
00:17:10.471   "num_base_bdevs_operational": 1,
00:17:10.471   "base_bdevs_list": [
00:17:10.471     {
00:17:10.471       "name": null,
00:17:10.471       "uuid": "00000000-0000-0000-0000-000000000000",
00:17:10.471       "is_configured": false,
00:17:10.471       "data_offset": 0,
00:17:10.471       "data_size": 7936
00:17:10.471     },
00:17:10.471     {
00:17:10.471       "name": "BaseBdev2",
00:17:10.471       "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700",
00:17:10.471       "is_configured": true,
00:17:10.471       "data_offset": 256,
00:17:10.471       "data_size": 7936
00:17:10.471     }
00:17:10.471   ]
00:17:10.471 }'
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:10.471 02:31:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:10.730   "name": "raid_bdev1",
00:17:10.730   "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec",
00:17:10.730   "strip_size_kb": 0,
00:17:10.730   "state": "online",
00:17:10.730   "raid_level": "raid1",
00:17:10.730   "superblock": true,
00:17:10.730   "num_base_bdevs": 2,
00:17:10.730   "num_base_bdevs_discovered": 1,
00:17:10.730   "num_base_bdevs_operational": 1,
00:17:10.730   "base_bdevs_list": [
00:17:10.730     {
00:17:10.730       "name": null,
00:17:10.730       "uuid": "00000000-0000-0000-0000-000000000000",
00:17:10.730       "is_configured": false,
00:17:10.730       "data_offset": 0,
00:17:10.730       "data_size": 7936
00:17:10.730     },
00:17:10.730     {
00:17:10.730       "name": "BaseBdev2",
00:17:10.730       "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700",
00:17:10.730       "is_configured": true,
00:17:10.730       "data_offset": 256,
00:17:10.730       "data_size": 7936
00:17:10.730     }
00:17:10.730   ]
00:17:10.730 }'
00:17:10.730 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:10.989 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:10.989 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:10.989 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:10.989 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:10.989 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.989 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:10.989 [2024-10-13 02:31:29.486260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:10.989 [2024-10-13 02:31:29.488164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30
00:17:10.989 [2024-10-13 02:31:29.490128] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:10.989 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.989 02:31:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:11.925   "name": "raid_bdev1",
00:17:11.925   "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec",
00:17:11.925   "strip_size_kb": 0,
00:17:11.925   "state": "online",
00:17:11.925   "raid_level": "raid1",
00:17:11.925   "superblock": true,
00:17:11.925   "num_base_bdevs": 2,
00:17:11.925   "num_base_bdevs_discovered": 2,
00:17:11.925   "num_base_bdevs_operational": 2,
00:17:11.925   "process": {
00:17:11.925     "type": "rebuild",
00:17:11.925     "target": "spare",
00:17:11.925     "progress": {
00:17:11.925       "blocks": 2560,
00:17:11.925       "percent": 32
00:17:11.925     }
00:17:11.925   },
00:17:11.925   "base_bdevs_list": [
00:17:11.925     {
00:17:11.925       "name": "spare",
00:17:11.925       "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab",
00:17:11.925       "is_configured": true,
00:17:11.925       "data_offset": 256,
00:17:11.925       "data_size": 7936
00:17:11.925     },
00:17:11.925     {
00:17:11.925       "name": "BaseBdev2",
00:17:11.925       "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700",
00:17:11.925       "is_configured": true,
00:17:11.925       "data_offset": 256,
00:17:11.925       "data_size": 7936
00:17:11.925     }
00:17:11.925   ]
00:17:11.925 }'
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:11.925 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:17:12.184 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=598
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:12.184   "name": "raid_bdev1",
00:17:12.184   "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec",
00:17:12.184   "strip_size_kb": 0,
00:17:12.184   "state": "online",
00:17:12.184   "raid_level": "raid1",
00:17:12.184   "superblock": true,
00:17:12.184   "num_base_bdevs": 2,
00:17:12.184   "num_base_bdevs_discovered": 2,
00:17:12.184   "num_base_bdevs_operational": 2,
00:17:12.184   "process": {
00:17:12.184     "type": "rebuild",
00:17:12.184     "target": "spare",
00:17:12.184     "progress": {
00:17:12.184       "blocks": 2816,
00:17:12.184       "percent": 35
00:17:12.184     }
00:17:12.184   },
00:17:12.184   "base_bdevs_list": [
00:17:12.184     {
00:17:12.184       "name": "spare",
00:17:12.184       "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab",
00:17:12.184       "is_configured": true,
00:17:12.184       "data_offset": 256,
00:17:12.184       "data_size": 7936
00:17:12.184     },
00:17:12.184     {
00:17:12.184       "name": "BaseBdev2",
00:17:12.184       "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700",
00:17:12.184       "is_configured": true,
00:17:12.184       "data_offset": 256,
00:17:12.184       "data_size": 7936
00:17:12.184     }
00:17:12.184   ]
00:17:12.184 }'
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:12.184 02:31:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.120 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:13.379 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.379 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:13.379   "name": "raid_bdev1",
00:17:13.379   "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec",
00:17:13.379   "strip_size_kb": 0,
00:17:13.379   "state": "online",
00:17:13.379   "raid_level": "raid1",
00:17:13.379   "superblock": true,
00:17:13.379   "num_base_bdevs": 2,
00:17:13.379   "num_base_bdevs_discovered": 2,
00:17:13.379   "num_base_bdevs_operational": 2,
00:17:13.379   "process": {
00:17:13.379     "type": "rebuild",
00:17:13.379     "target": "spare",
00:17:13.379     "progress": {
00:17:13.379       "blocks": 5888,
00:17:13.379       "percent": 74
00:17:13.379     }
00:17:13.379   },
00:17:13.379   "base_bdevs_list": [
00:17:13.379     {
00:17:13.379       "name": "spare",
00:17:13.379       "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab",
00:17:13.379       "is_configured": true,
00:17:13.379       "data_offset": 256,
00:17:13.379       "data_size": 7936
00:17:13.379     },
00:17:13.379     {
00:17:13.379       "name": "BaseBdev2",
00:17:13.379       "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700",
00:17:13.379       "is_configured": true,
00:17:13.379       "data_offset": 256,
00:17:13.379       "data_size": 7936
00:17:13.379     }
00:17:13.379   ]
00:17:13.379 }'
00:17:13.379 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:13.379 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:13.379 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:13.379 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:13.379 02:31:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:13.946 [2024-10-13 02:31:32.603912] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:17:13.946 [2024-10-13 02:31:32.604105] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:17:13.946 [2024-10-13 02:31:32.604277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate --
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.515 02:31:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.515 "name": "raid_bdev1", 00:17:14.515 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:14.515 "strip_size_kb": 0, 00:17:14.515 "state": "online", 00:17:14.515 "raid_level": "raid1", 00:17:14.515 "superblock": true, 00:17:14.515 "num_base_bdevs": 2, 00:17:14.515 "num_base_bdevs_discovered": 2, 00:17:14.515 "num_base_bdevs_operational": 2, 00:17:14.515 "base_bdevs_list": [ 00:17:14.515 { 00:17:14.515 "name": "spare", 00:17:14.515 "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab", 00:17:14.515 "is_configured": true, 00:17:14.515 "data_offset": 256, 00:17:14.515 "data_size": 7936 
00:17:14.515 }, 00:17:14.515 { 00:17:14.515 "name": "BaseBdev2", 00:17:14.515 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:14.515 "is_configured": true, 00:17:14.515 "data_offset": 256, 00:17:14.515 "data_size": 7936 00:17:14.515 } 00:17:14.515 ] 00:17:14.515 }' 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.515 
02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.515 "name": "raid_bdev1", 00:17:14.515 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:14.515 "strip_size_kb": 0, 00:17:14.515 "state": "online", 00:17:14.515 "raid_level": "raid1", 00:17:14.515 "superblock": true, 00:17:14.515 "num_base_bdevs": 2, 00:17:14.515 "num_base_bdevs_discovered": 2, 00:17:14.515 "num_base_bdevs_operational": 2, 00:17:14.515 "base_bdevs_list": [ 00:17:14.515 { 00:17:14.515 "name": "spare", 00:17:14.515 "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab", 00:17:14.515 "is_configured": true, 00:17:14.515 "data_offset": 256, 00:17:14.515 "data_size": 7936 00:17:14.515 }, 00:17:14.515 { 00:17:14.515 "name": "BaseBdev2", 00:17:14.515 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:14.515 "is_configured": true, 00:17:14.515 "data_offset": 256, 00:17:14.515 "data_size": 7936 00:17:14.515 } 00:17:14.515 ] 00:17:14.515 }' 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.515 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.774 02:31:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.774 "name": "raid_bdev1", 00:17:14.774 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:14.774 "strip_size_kb": 0, 00:17:14.774 "state": "online", 00:17:14.774 "raid_level": "raid1", 00:17:14.774 "superblock": true, 00:17:14.774 "num_base_bdevs": 2, 00:17:14.774 "num_base_bdevs_discovered": 2, 00:17:14.774 "num_base_bdevs_operational": 2, 00:17:14.774 "base_bdevs_list": [ 00:17:14.774 { 00:17:14.774 "name": "spare", 00:17:14.774 "uuid": 
"9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab", 00:17:14.774 "is_configured": true, 00:17:14.774 "data_offset": 256, 00:17:14.774 "data_size": 7936 00:17:14.774 }, 00:17:14.774 { 00:17:14.774 "name": "BaseBdev2", 00:17:14.774 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:14.774 "is_configured": true, 00:17:14.774 "data_offset": 256, 00:17:14.774 "data_size": 7936 00:17:14.774 } 00:17:14.774 ] 00:17:14.774 }' 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.774 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.034 [2024-10-13 02:31:33.645714] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.034 [2024-10-13 02:31:33.645804] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.034 [2024-10-13 02:31:33.645926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.034 [2024-10-13 02:31:33.646051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.034 [2024-10-13 02:31:33.646130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:15.034 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:17:15.292 /dev/nbd0 00:17:15.292 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:15.292 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:15.292 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:15.292 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:15.292 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:15.292 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:15.293 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:15.293 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:17:15.293 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:15.293 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:15.293 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.293 1+0 records in 00:17:15.293 1+0 records out 00:17:15.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031242 s, 13.1 MB/s 00:17:15.293 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.293 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:15.293 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.551 02:31:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:15.551 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:15.551 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:15.551 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:15.551 02:31:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:15.551 /dev/nbd1 00:17:15.551 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:15.551 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:17:15.552 1+0 records in 00:17:15.552 1+0 records out 00:17:15.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054193 s, 7.6 MB/s 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:15.552 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:15.810 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:15.810 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.810 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:15.810 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.811 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:15.811 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.811 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:16.069 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:16.069 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:16.069 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:16.069 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.069 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.069 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:16.069 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:16.070 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.070 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.070 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:16.328 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:16.328 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:16.328 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:16.328 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.328 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.328 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:16.329 
02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.329 [2024-10-13 02:31:34.793390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.329 [2024-10-13 02:31:34.793503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.329 [2024-10-13 02:31:34.793542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:16.329 [2024-10-13 02:31:34.793574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.329 [2024-10-13 02:31:34.795643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.329 [2024-10-13 02:31:34.795716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.329 [2024-10-13 02:31:34.795804] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:17:16.329 [2024-10-13 02:31:34.795894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.329 [2024-10-13 02:31:34.796020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.329 spare 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.329 [2024-10-13 02:31:34.895966] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:17:16.329 [2024-10-13 02:31:34.896106] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:16.329 [2024-10-13 02:31:34.896323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:17:16.329 [2024-10-13 02:31:34.896507] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:17:16.329 [2024-10-13 02:31:34.896548] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:17:16.329 [2024-10-13 02:31:34.896718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.329 "name": "raid_bdev1", 00:17:16.329 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:16.329 "strip_size_kb": 0, 00:17:16.329 "state": "online", 00:17:16.329 "raid_level": "raid1", 00:17:16.329 "superblock": true, 00:17:16.329 "num_base_bdevs": 2, 00:17:16.329 "num_base_bdevs_discovered": 2, 00:17:16.329 "num_base_bdevs_operational": 2, 00:17:16.329 "base_bdevs_list": [ 
00:17:16.329 { 00:17:16.329 "name": "spare", 00:17:16.329 "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab", 00:17:16.329 "is_configured": true, 00:17:16.329 "data_offset": 256, 00:17:16.329 "data_size": 7936 00:17:16.329 }, 00:17:16.329 { 00:17:16.329 "name": "BaseBdev2", 00:17:16.329 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:16.329 "is_configured": true, 00:17:16.329 "data_offset": 256, 00:17:16.329 "data_size": 7936 00:17:16.329 } 00:17:16.329 ] 00:17:16.329 }' 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.329 02:31:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.897 "name": "raid_bdev1", 00:17:16.897 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:16.897 "strip_size_kb": 0, 00:17:16.897 "state": "online", 00:17:16.897 "raid_level": "raid1", 00:17:16.897 "superblock": true, 00:17:16.897 "num_base_bdevs": 2, 00:17:16.897 "num_base_bdevs_discovered": 2, 00:17:16.897 "num_base_bdevs_operational": 2, 00:17:16.897 "base_bdevs_list": [ 00:17:16.897 { 00:17:16.897 "name": "spare", 00:17:16.897 "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab", 00:17:16.897 "is_configured": true, 00:17:16.897 "data_offset": 256, 00:17:16.897 "data_size": 7936 00:17:16.897 }, 00:17:16.897 { 00:17:16.897 "name": "BaseBdev2", 00:17:16.897 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:16.897 "is_configured": true, 00:17:16.897 "data_offset": 256, 00:17:16.897 "data_size": 7936 00:17:16.897 } 00:17:16.897 ] 00:17:16.897 }' 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 [2024-10-13 02:31:35.528222] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.897 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.156 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.156 "name": "raid_bdev1", 00:17:17.156 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:17.156 "strip_size_kb": 0, 00:17:17.156 "state": "online", 00:17:17.156 "raid_level": "raid1", 00:17:17.156 "superblock": true, 00:17:17.156 "num_base_bdevs": 2, 00:17:17.156 "num_base_bdevs_discovered": 1, 00:17:17.156 "num_base_bdevs_operational": 1, 00:17:17.156 "base_bdevs_list": [ 00:17:17.156 { 00:17:17.156 "name": null, 00:17:17.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.156 "is_configured": false, 00:17:17.156 "data_offset": 0, 00:17:17.156 "data_size": 7936 00:17:17.156 }, 00:17:17.156 { 00:17:17.156 "name": "BaseBdev2", 00:17:17.156 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:17.156 "is_configured": true, 00:17:17.156 "data_offset": 256, 00:17:17.156 "data_size": 7936 00:17:17.156 } 00:17:17.156 ] 00:17:17.156 }' 00:17:17.156 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.156 02:31:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.415 02:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.415 02:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:17.415 02:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.415 [2024-10-13 02:31:36.011712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.415 [2024-10-13 02:31:36.012038] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:17.415 [2024-10-13 02:31:36.012100] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:17.415 [2024-10-13 02:31:36.012177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.415 [2024-10-13 02:31:36.013862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:17:17.415 [2024-10-13 02:31:36.015812] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.415 02:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.415 02:31:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:18.351 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.351 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.351 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.351 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.351 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.351 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.351 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.351 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.351 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.610 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.610 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.610 "name": "raid_bdev1", 00:17:18.610 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:18.610 "strip_size_kb": 0, 00:17:18.610 "state": "online", 00:17:18.610 "raid_level": "raid1", 00:17:18.610 "superblock": true, 00:17:18.610 "num_base_bdevs": 2, 00:17:18.610 "num_base_bdevs_discovered": 2, 00:17:18.610 "num_base_bdevs_operational": 2, 00:17:18.610 "process": { 00:17:18.610 "type": "rebuild", 00:17:18.610 "target": "spare", 00:17:18.610 "progress": { 00:17:18.610 "blocks": 2560, 00:17:18.610 "percent": 32 00:17:18.610 } 00:17:18.610 }, 00:17:18.610 "base_bdevs_list": [ 00:17:18.610 { 00:17:18.610 "name": "spare", 00:17:18.610 "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab", 00:17:18.610 "is_configured": true, 00:17:18.610 "data_offset": 256, 00:17:18.610 "data_size": 7936 00:17:18.610 }, 00:17:18.610 { 00:17:18.610 "name": "BaseBdev2", 00:17:18.610 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:18.610 "is_configured": true, 00:17:18.610 "data_offset": 256, 00:17:18.610 "data_size": 7936 00:17:18.610 } 00:17:18.610 ] 00:17:18.610 }' 00:17:18.610 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.610 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.610 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.610 02:31:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.610 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:18.610 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.611 [2024-10-13 02:31:37.168176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.611 [2024-10-13 02:31:37.221131] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:18.611 [2024-10-13 02:31:37.221220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.611 [2024-10-13 02:31:37.221239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.611 [2024-10-13 02:31:37.221247] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:18.611 02:31:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.611 "name": "raid_bdev1", 00:17:18.611 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:18.611 "strip_size_kb": 0, 00:17:18.611 "state": "online", 00:17:18.611 "raid_level": "raid1", 00:17:18.611 "superblock": true, 00:17:18.611 "num_base_bdevs": 2, 00:17:18.611 "num_base_bdevs_discovered": 1, 00:17:18.611 "num_base_bdevs_operational": 1, 00:17:18.611 "base_bdevs_list": [ 00:17:18.611 { 00:17:18.611 "name": null, 00:17:18.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.611 "is_configured": false, 00:17:18.611 "data_offset": 0, 00:17:18.611 "data_size": 7936 00:17:18.611 }, 00:17:18.611 { 00:17:18.611 "name": "BaseBdev2", 00:17:18.611 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:18.611 "is_configured": true, 00:17:18.611 "data_offset": 256, 00:17:18.611 "data_size": 7936 00:17:18.611 } 
00:17:18.611 ] 00:17:18.611 }' 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.611 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.179 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:19.179 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.179 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.179 [2024-10-13 02:31:37.683881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:19.179 [2024-10-13 02:31:37.684025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.179 [2024-10-13 02:31:37.684074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:19.179 [2024-10-13 02:31:37.684105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.179 [2024-10-13 02:31:37.684361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.179 [2024-10-13 02:31:37.684421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:19.179 [2024-10-13 02:31:37.684516] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:19.179 [2024-10-13 02:31:37.684554] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.179 [2024-10-13 02:31:37.684617] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:19.179 [2024-10-13 02:31:37.684668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.179 [2024-10-13 02:31:37.686363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:17:19.179 [2024-10-13 02:31:37.688377] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.179 spare 00:17:19.179 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.179 02:31:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.113 "name": 
"raid_bdev1", 00:17:20.113 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:20.113 "strip_size_kb": 0, 00:17:20.113 "state": "online", 00:17:20.113 "raid_level": "raid1", 00:17:20.113 "superblock": true, 00:17:20.113 "num_base_bdevs": 2, 00:17:20.113 "num_base_bdevs_discovered": 2, 00:17:20.113 "num_base_bdevs_operational": 2, 00:17:20.113 "process": { 00:17:20.113 "type": "rebuild", 00:17:20.113 "target": "spare", 00:17:20.113 "progress": { 00:17:20.113 "blocks": 2560, 00:17:20.113 "percent": 32 00:17:20.113 } 00:17:20.113 }, 00:17:20.113 "base_bdevs_list": [ 00:17:20.113 { 00:17:20.113 "name": "spare", 00:17:20.113 "uuid": "9ae5b3bb-20d4-56b9-b8a9-a6ef1e487eab", 00:17:20.113 "is_configured": true, 00:17:20.113 "data_offset": 256, 00:17:20.113 "data_size": 7936 00:17:20.113 }, 00:17:20.113 { 00:17:20.113 "name": "BaseBdev2", 00:17:20.113 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:20.113 "is_configured": true, 00:17:20.113 "data_offset": 256, 00:17:20.113 "data_size": 7936 00:17:20.113 } 00:17:20.113 ] 00:17:20.113 }' 00:17:20.113 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.372 [2024-10-13 02:31:38.860125] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:20.372 [2024-10-13 02:31:38.893630] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:20.372 [2024-10-13 02:31:38.893808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.372 [2024-10-13 02:31:38.893845] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.372 [2024-10-13 02:31:38.893878] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.372 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.372 "name": "raid_bdev1", 00:17:20.372 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:20.372 "strip_size_kb": 0, 00:17:20.372 "state": "online", 00:17:20.372 "raid_level": "raid1", 00:17:20.372 "superblock": true, 00:17:20.372 "num_base_bdevs": 2, 00:17:20.372 "num_base_bdevs_discovered": 1, 00:17:20.372 "num_base_bdevs_operational": 1, 00:17:20.372 "base_bdevs_list": [ 00:17:20.372 { 00:17:20.372 "name": null, 00:17:20.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.372 "is_configured": false, 00:17:20.372 "data_offset": 0, 00:17:20.373 "data_size": 7936 00:17:20.373 }, 00:17:20.373 { 00:17:20.373 "name": "BaseBdev2", 00:17:20.373 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:20.373 "is_configured": true, 00:17:20.373 "data_offset": 256, 00:17:20.373 "data_size": 7936 00:17:20.373 } 00:17:20.373 ] 00:17:20.373 }' 00:17:20.373 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.373 02:31:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.940 02:31:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.940 "name": "raid_bdev1", 00:17:20.940 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:20.940 "strip_size_kb": 0, 00:17:20.940 "state": "online", 00:17:20.940 "raid_level": "raid1", 00:17:20.940 "superblock": true, 00:17:20.940 "num_base_bdevs": 2, 00:17:20.940 "num_base_bdevs_discovered": 1, 00:17:20.940 "num_base_bdevs_operational": 1, 00:17:20.940 "base_bdevs_list": [ 00:17:20.940 { 00:17:20.940 "name": null, 00:17:20.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.940 "is_configured": false, 00:17:20.940 "data_offset": 0, 00:17:20.940 "data_size": 7936 00:17:20.940 }, 00:17:20.940 { 00:17:20.940 "name": "BaseBdev2", 00:17:20.940 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:20.940 "is_configured": true, 00:17:20.940 "data_offset": 256, 00:17:20.940 "data_size": 7936 00:17:20.940 } 00:17:20.940 ] 00:17:20.940 }' 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.940 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.941 [2024-10-13 02:31:39.500024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:20.941 [2024-10-13 02:31:39.500160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.941 [2024-10-13 02:31:39.500197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:20.941 [2024-10-13 02:31:39.500227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.941 [2024-10-13 02:31:39.500473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.941 [2024-10-13 02:31:39.500535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:20.941 [2024-10-13 02:31:39.500599] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:20.941 [2024-10-13 02:31:39.500621] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:20.941 [2024-10-13 02:31:39.500629] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:20.941 [2024-10-13 02:31:39.500641] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:20.941 BaseBdev1 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.941 02:31:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:21.876 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.876 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.877 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.135 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.135 "name": "raid_bdev1", 00:17:22.135 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:22.135 "strip_size_kb": 0, 00:17:22.135 "state": "online", 00:17:22.135 "raid_level": "raid1", 00:17:22.135 "superblock": true, 00:17:22.135 "num_base_bdevs": 2, 00:17:22.135 "num_base_bdevs_discovered": 1, 00:17:22.135 "num_base_bdevs_operational": 1, 00:17:22.135 "base_bdevs_list": [ 00:17:22.135 { 00:17:22.135 "name": null, 00:17:22.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.135 "is_configured": false, 00:17:22.135 "data_offset": 0, 00:17:22.135 "data_size": 7936 00:17:22.135 }, 00:17:22.135 { 00:17:22.135 "name": "BaseBdev2", 00:17:22.135 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:22.135 "is_configured": true, 00:17:22.135 "data_offset": 256, 00:17:22.135 "data_size": 7936 00:17:22.135 } 00:17:22.135 ] 00:17:22.135 }' 00:17:22.135 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.135 02:31:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.397 "name": "raid_bdev1", 00:17:22.397 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:22.397 "strip_size_kb": 0, 00:17:22.397 "state": "online", 00:17:22.397 "raid_level": "raid1", 00:17:22.397 "superblock": true, 00:17:22.397 "num_base_bdevs": 2, 00:17:22.397 "num_base_bdevs_discovered": 1, 00:17:22.397 "num_base_bdevs_operational": 1, 00:17:22.397 "base_bdevs_list": [ 00:17:22.397 { 00:17:22.397 "name": null, 00:17:22.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.397 "is_configured": false, 00:17:22.397 "data_offset": 0, 00:17:22.397 "data_size": 7936 00:17:22.397 }, 00:17:22.397 { 00:17:22.397 "name": "BaseBdev2", 00:17:22.397 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:22.397 "is_configured": 
true, 00:17:22.397 "data_offset": 256, 00:17:22.397 "data_size": 7936 00:17:22.397 } 00:17:22.397 ] 00:17:22.397 }' 00:17:22.397 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.656 [2024-10-13 02:31:41.180061] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.656 [2024-10-13 02:31:41.180305] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:22.656 [2024-10-13 02:31:41.180373] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:22.656 request: 00:17:22.656 { 00:17:22.656 "base_bdev": "BaseBdev1", 00:17:22.656 "raid_bdev": "raid_bdev1", 00:17:22.656 "method": "bdev_raid_add_base_bdev", 00:17:22.656 "req_id": 1 00:17:22.656 } 00:17:22.656 Got JSON-RPC error response 00:17:22.656 response: 00:17:22.656 { 00:17:22.656 "code": -22, 00:17:22.656 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:22.656 } 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.656 02:31:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.592 "name": "raid_bdev1", 00:17:23.592 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:23.592 "strip_size_kb": 0, 00:17:23.592 "state": "online", 00:17:23.592 "raid_level": "raid1", 00:17:23.592 "superblock": true, 00:17:23.592 "num_base_bdevs": 2, 00:17:23.592 "num_base_bdevs_discovered": 1, 00:17:23.592 "num_base_bdevs_operational": 1, 00:17:23.592 "base_bdevs_list": [ 00:17:23.592 { 00:17:23.592 "name": null, 00:17:23.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.592 "is_configured": false, 00:17:23.592 
"data_offset": 0, 00:17:23.592 "data_size": 7936 00:17:23.592 }, 00:17:23.592 { 00:17:23.592 "name": "BaseBdev2", 00:17:23.592 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:23.592 "is_configured": true, 00:17:23.592 "data_offset": 256, 00:17:23.592 "data_size": 7936 00:17:23.592 } 00:17:23.592 ] 00:17:23.592 }' 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.592 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.160 "name": "raid_bdev1", 00:17:24.160 "uuid": "c1b63714-8d23-4df3-8e5d-e672acfd2dec", 00:17:24.160 
"strip_size_kb": 0, 00:17:24.160 "state": "online", 00:17:24.160 "raid_level": "raid1", 00:17:24.160 "superblock": true, 00:17:24.160 "num_base_bdevs": 2, 00:17:24.160 "num_base_bdevs_discovered": 1, 00:17:24.160 "num_base_bdevs_operational": 1, 00:17:24.160 "base_bdevs_list": [ 00:17:24.160 { 00:17:24.160 "name": null, 00:17:24.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.160 "is_configured": false, 00:17:24.160 "data_offset": 0, 00:17:24.160 "data_size": 7936 00:17:24.160 }, 00:17:24.160 { 00:17:24.160 "name": "BaseBdev2", 00:17:24.160 "uuid": "7adb1601-c6a0-58b7-bc71-a0822052b700", 00:17:24.160 "is_configured": true, 00:17:24.160 "data_offset": 256, 00:17:24.160 "data_size": 7936 00:17:24.160 } 00:17:24.160 ] 00:17:24.160 }' 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98041 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98041 ']' 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98041 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98041 00:17:24.160 02:31:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.160 killing process with pid 98041 00:17:24.160 Received shutdown signal, test time was about 60.000000 seconds 00:17:24.160 00:17:24.160 Latency(us) 00:17:24.160 [2024-10-13T02:31:42.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.160 [2024-10-13T02:31:42.844Z] =================================================================================================================== 00:17:24.160 [2024-10-13T02:31:42.844Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98041' 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98041 00:17:24.160 [2024-10-13 02:31:42.749342] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.160 02:31:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98041 00:17:24.161 [2024-10-13 02:31:42.749495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.161 [2024-10-13 02:31:42.749548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.161 [2024-10-13 02:31:42.749557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:17:24.161 [2024-10-13 02:31:42.783610] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.420 02:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:24.420 ************************************ 00:17:24.420 END TEST raid_rebuild_test_sb_md_separate 00:17:24.420 
************************************ 00:17:24.420 00:17:24.420 real 0m18.380s 00:17:24.420 user 0m24.367s 00:17:24.420 sys 0m2.726s 00:17:24.420 02:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:24.420 02:31:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.420 02:31:43 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:24.420 02:31:43 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:24.420 02:31:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:24.420 02:31:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:24.420 02:31:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.420 ************************************ 00:17:24.420 START TEST raid_state_function_test_sb_md_interleaved 00:17:24.420 ************************************ 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.420 02:31:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:24.420 Process raid pid: 98723 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98723 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98723' 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98723 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98723 ']' 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.420 02:31:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:24.679 [2024-10-13 02:31:43.170326] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:24.679 [2024-10-13 02:31:43.170546] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.679 [2024-10-13 02:31:43.297401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.679 [2024-10-13 02:31:43.349254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.938 [2024-10-13 02:31:43.391416] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.938 [2024-10-13 02:31:43.391533] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.505 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.505 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:25.506 [2024-10-13 02:31:44.049079] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.506 [2024-10-13 02:31:44.049191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.506 [2024-10-13 02:31:44.049346] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.506 [2024-10-13 02:31:44.049372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.506 02:31:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:25.506 02:31:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.506 "name": "Existed_Raid", 00:17:25.506 "uuid": "1f67de5f-d50a-4496-a749-e10f96d79f42", 00:17:25.506 "strip_size_kb": 0, 00:17:25.506 "state": "configuring", 00:17:25.506 "raid_level": "raid1", 00:17:25.506 "superblock": true, 00:17:25.506 "num_base_bdevs": 2, 00:17:25.506 "num_base_bdevs_discovered": 0, 00:17:25.506 "num_base_bdevs_operational": 2, 00:17:25.506 "base_bdevs_list": [ 00:17:25.506 { 00:17:25.506 "name": "BaseBdev1", 00:17:25.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.506 "is_configured": false, 00:17:25.506 "data_offset": 0, 00:17:25.506 "data_size": 0 00:17:25.506 }, 00:17:25.506 { 00:17:25.506 "name": "BaseBdev2", 00:17:25.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.506 "is_configured": false, 00:17:25.506 "data_offset": 0, 00:17:25.506 "data_size": 0 00:17:25.506 } 00:17:25.506 ] 00:17:25.506 }' 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.506 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.074 [2024-10-13 02:31:44.532093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.074 [2024-10-13 02:31:44.532224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.074 [2024-10-13 02:31:44.544081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.074 [2024-10-13 02:31:44.544172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.074 [2024-10-13 02:31:44.544226] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.074 [2024-10-13 02:31:44.544251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.074 [2024-10-13 02:31:44.565272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.074 BaseBdev1 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.074 [ 00:17:26.074 { 00:17:26.074 "name": "BaseBdev1", 00:17:26.074 "aliases": [ 00:17:26.074 "e0774e01-2389-432a-bc7b-6174248efad0" 00:17:26.074 ], 00:17:26.074 "product_name": "Malloc disk", 00:17:26.074 "block_size": 4128, 00:17:26.074 "num_blocks": 8192, 00:17:26.074 "uuid": "e0774e01-2389-432a-bc7b-6174248efad0", 00:17:26.074 "md_size": 32, 00:17:26.074 
"md_interleave": true, 00:17:26.074 "dif_type": 0, 00:17:26.074 "assigned_rate_limits": { 00:17:26.074 "rw_ios_per_sec": 0, 00:17:26.074 "rw_mbytes_per_sec": 0, 00:17:26.074 "r_mbytes_per_sec": 0, 00:17:26.074 "w_mbytes_per_sec": 0 00:17:26.074 }, 00:17:26.074 "claimed": true, 00:17:26.074 "claim_type": "exclusive_write", 00:17:26.074 "zoned": false, 00:17:26.074 "supported_io_types": { 00:17:26.074 "read": true, 00:17:26.074 "write": true, 00:17:26.074 "unmap": true, 00:17:26.074 "flush": true, 00:17:26.074 "reset": true, 00:17:26.074 "nvme_admin": false, 00:17:26.074 "nvme_io": false, 00:17:26.074 "nvme_io_md": false, 00:17:26.074 "write_zeroes": true, 00:17:26.074 "zcopy": true, 00:17:26.074 "get_zone_info": false, 00:17:26.074 "zone_management": false, 00:17:26.074 "zone_append": false, 00:17:26.074 "compare": false, 00:17:26.074 "compare_and_write": false, 00:17:26.074 "abort": true, 00:17:26.074 "seek_hole": false, 00:17:26.074 "seek_data": false, 00:17:26.074 "copy": true, 00:17:26.074 "nvme_iov_md": false 00:17:26.074 }, 00:17:26.074 "memory_domains": [ 00:17:26.074 { 00:17:26.074 "dma_device_id": "system", 00:17:26.074 "dma_device_type": 1 00:17:26.074 }, 00:17:26.074 { 00:17:26.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.074 "dma_device_type": 2 00:17:26.074 } 00:17:26.074 ], 00:17:26.074 "driver_specific": {} 00:17:26.074 } 00:17:26.074 ] 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.074 02:31:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.074 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.075 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.075 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.075 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.075 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.075 "name": "Existed_Raid", 00:17:26.075 "uuid": "818ac8dd-46be-4af2-83af-32a75ee5432b", 00:17:26.075 "strip_size_kb": 0, 00:17:26.075 "state": "configuring", 00:17:26.075 "raid_level": "raid1", 
00:17:26.075 "superblock": true, 00:17:26.075 "num_base_bdevs": 2, 00:17:26.075 "num_base_bdevs_discovered": 1, 00:17:26.075 "num_base_bdevs_operational": 2, 00:17:26.075 "base_bdevs_list": [ 00:17:26.075 { 00:17:26.075 "name": "BaseBdev1", 00:17:26.075 "uuid": "e0774e01-2389-432a-bc7b-6174248efad0", 00:17:26.075 "is_configured": true, 00:17:26.075 "data_offset": 256, 00:17:26.075 "data_size": 7936 00:17:26.075 }, 00:17:26.075 { 00:17:26.075 "name": "BaseBdev2", 00:17:26.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.075 "is_configured": false, 00:17:26.075 "data_offset": 0, 00:17:26.075 "data_size": 0 00:17:26.075 } 00:17:26.075 ] 00:17:26.075 }' 00:17:26.075 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.075 02:31:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.643 [2024-10-13 02:31:45.064513] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.643 [2024-10-13 02:31:45.064649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.643 [2024-10-13 02:31:45.076510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.643 [2024-10-13 02:31:45.078440] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.643 [2024-10-13 02:31:45.078521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.643 
02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.643 "name": "Existed_Raid", 00:17:26.643 "uuid": "96a11372-b447-4cd5-81a8-2e50db57cf0d", 00:17:26.643 "strip_size_kb": 0, 00:17:26.643 "state": "configuring", 00:17:26.643 "raid_level": "raid1", 00:17:26.643 "superblock": true, 00:17:26.643 "num_base_bdevs": 2, 00:17:26.643 "num_base_bdevs_discovered": 1, 00:17:26.643 "num_base_bdevs_operational": 2, 00:17:26.643 "base_bdevs_list": [ 00:17:26.643 { 00:17:26.643 "name": "BaseBdev1", 00:17:26.643 "uuid": "e0774e01-2389-432a-bc7b-6174248efad0", 00:17:26.643 "is_configured": true, 00:17:26.643 "data_offset": 256, 00:17:26.643 "data_size": 7936 00:17:26.643 }, 00:17:26.643 { 00:17:26.643 "name": "BaseBdev2", 00:17:26.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.643 "is_configured": false, 00:17:26.643 "data_offset": 0, 00:17:26.643 "data_size": 0 00:17:26.643 } 00:17:26.643 ] 00:17:26.643 }' 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:26.643 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.903 [2024-10-13 02:31:45.534831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.903 [2024-10-13 02:31:45.535212] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:17:26.903 [2024-10-13 02:31:45.535280] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:26.903 [2024-10-13 02:31:45.535456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:17:26.903 [2024-10-13 02:31:45.535600] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:17:26.903 [2024-10-13 02:31:45.535655] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:17:26.903 [2024-10-13 02:31:45.535773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.903 BaseBdev2 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:26.903 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.904 [ 00:17:26.904 { 00:17:26.904 "name": "BaseBdev2", 00:17:26.904 "aliases": [ 00:17:26.904 "ea67289b-fb6d-4d2f-a228-28617b2c9fbe" 00:17:26.904 ], 00:17:26.904 "product_name": "Malloc disk", 00:17:26.904 "block_size": 4128, 00:17:26.904 "num_blocks": 8192, 00:17:26.904 "uuid": "ea67289b-fb6d-4d2f-a228-28617b2c9fbe", 00:17:26.904 "md_size": 32, 00:17:26.904 "md_interleave": true, 00:17:26.904 "dif_type": 0, 00:17:26.904 "assigned_rate_limits": { 00:17:26.904 "rw_ios_per_sec": 0, 00:17:26.904 "rw_mbytes_per_sec": 0, 00:17:26.904 "r_mbytes_per_sec": 0, 00:17:26.904 "w_mbytes_per_sec": 0 00:17:26.904 }, 00:17:26.904 "claimed": true, 00:17:26.904 "claim_type": "exclusive_write", 
00:17:26.904 "zoned": false, 00:17:26.904 "supported_io_types": { 00:17:26.904 "read": true, 00:17:26.904 "write": true, 00:17:26.904 "unmap": true, 00:17:26.904 "flush": true, 00:17:26.904 "reset": true, 00:17:26.904 "nvme_admin": false, 00:17:26.904 "nvme_io": false, 00:17:26.904 "nvme_io_md": false, 00:17:26.904 "write_zeroes": true, 00:17:26.904 "zcopy": true, 00:17:26.904 "get_zone_info": false, 00:17:26.904 "zone_management": false, 00:17:26.904 "zone_append": false, 00:17:26.904 "compare": false, 00:17:26.904 "compare_and_write": false, 00:17:26.904 "abort": true, 00:17:26.904 "seek_hole": false, 00:17:26.904 "seek_data": false, 00:17:26.904 "copy": true, 00:17:26.904 "nvme_iov_md": false 00:17:26.904 }, 00:17:26.904 "memory_domains": [ 00:17:26.904 { 00:17:26.904 "dma_device_id": "system", 00:17:26.904 "dma_device_type": 1 00:17:26.904 }, 00:17:26.904 { 00:17:26.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.904 "dma_device_type": 2 00:17:26.904 } 00:17:26.904 ], 00:17:26.904 "driver_specific": {} 00:17:26.904 } 00:17:26.904 ] 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.904 
02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:26.904 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.163 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.163 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.163 "name": "Existed_Raid", 00:17:27.163 "uuid": "96a11372-b447-4cd5-81a8-2e50db57cf0d", 00:17:27.163 "strip_size_kb": 0, 00:17:27.163 "state": "online", 00:17:27.163 "raid_level": "raid1", 00:17:27.163 "superblock": true, 00:17:27.163 "num_base_bdevs": 2, 00:17:27.163 "num_base_bdevs_discovered": 2, 00:17:27.163 
"num_base_bdevs_operational": 2, 00:17:27.163 "base_bdevs_list": [ 00:17:27.163 { 00:17:27.163 "name": "BaseBdev1", 00:17:27.163 "uuid": "e0774e01-2389-432a-bc7b-6174248efad0", 00:17:27.163 "is_configured": true, 00:17:27.163 "data_offset": 256, 00:17:27.163 "data_size": 7936 00:17:27.163 }, 00:17:27.163 { 00:17:27.163 "name": "BaseBdev2", 00:17:27.163 "uuid": "ea67289b-fb6d-4d2f-a228-28617b2c9fbe", 00:17:27.163 "is_configured": true, 00:17:27.163 "data_offset": 256, 00:17:27.163 "data_size": 7936 00:17:27.163 } 00:17:27.163 ] 00:17:27.163 }' 00:17:27.163 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.163 02:31:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.422 02:31:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.422 [2024-10-13 02:31:46.058316] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:27.422 "name": "Existed_Raid", 00:17:27.422 "aliases": [ 00:17:27.422 "96a11372-b447-4cd5-81a8-2e50db57cf0d" 00:17:27.422 ], 00:17:27.422 "product_name": "Raid Volume", 00:17:27.422 "block_size": 4128, 00:17:27.422 "num_blocks": 7936, 00:17:27.422 "uuid": "96a11372-b447-4cd5-81a8-2e50db57cf0d", 00:17:27.422 "md_size": 32, 00:17:27.422 "md_interleave": true, 00:17:27.422 "dif_type": 0, 00:17:27.422 "assigned_rate_limits": { 00:17:27.422 "rw_ios_per_sec": 0, 00:17:27.422 "rw_mbytes_per_sec": 0, 00:17:27.422 "r_mbytes_per_sec": 0, 00:17:27.422 "w_mbytes_per_sec": 0 00:17:27.422 }, 00:17:27.422 "claimed": false, 00:17:27.422 "zoned": false, 00:17:27.422 "supported_io_types": { 00:17:27.422 "read": true, 00:17:27.422 "write": true, 00:17:27.422 "unmap": false, 00:17:27.422 "flush": false, 00:17:27.422 "reset": true, 00:17:27.422 "nvme_admin": false, 00:17:27.422 "nvme_io": false, 00:17:27.422 "nvme_io_md": false, 00:17:27.422 "write_zeroes": true, 00:17:27.422 "zcopy": false, 00:17:27.422 "get_zone_info": false, 00:17:27.422 "zone_management": false, 00:17:27.422 "zone_append": false, 00:17:27.422 "compare": false, 00:17:27.422 "compare_and_write": false, 00:17:27.422 "abort": false, 00:17:27.422 "seek_hole": false, 00:17:27.422 "seek_data": false, 00:17:27.422 "copy": false, 00:17:27.422 "nvme_iov_md": false 00:17:27.422 }, 00:17:27.422 "memory_domains": [ 00:17:27.422 { 00:17:27.422 "dma_device_id": "system", 00:17:27.422 "dma_device_type": 1 00:17:27.422 }, 00:17:27.422 { 00:17:27.422 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:27.422 "dma_device_type": 2 00:17:27.422 }, 00:17:27.422 { 00:17:27.422 "dma_device_id": "system", 00:17:27.422 "dma_device_type": 1 00:17:27.422 }, 00:17:27.422 { 00:17:27.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.422 "dma_device_type": 2 00:17:27.422 } 00:17:27.422 ], 00:17:27.422 "driver_specific": { 00:17:27.422 "raid": { 00:17:27.422 "uuid": "96a11372-b447-4cd5-81a8-2e50db57cf0d", 00:17:27.422 "strip_size_kb": 0, 00:17:27.422 "state": "online", 00:17:27.422 "raid_level": "raid1", 00:17:27.422 "superblock": true, 00:17:27.422 "num_base_bdevs": 2, 00:17:27.422 "num_base_bdevs_discovered": 2, 00:17:27.422 "num_base_bdevs_operational": 2, 00:17:27.422 "base_bdevs_list": [ 00:17:27.422 { 00:17:27.422 "name": "BaseBdev1", 00:17:27.422 "uuid": "e0774e01-2389-432a-bc7b-6174248efad0", 00:17:27.422 "is_configured": true, 00:17:27.422 "data_offset": 256, 00:17:27.422 "data_size": 7936 00:17:27.422 }, 00:17:27.422 { 00:17:27.422 "name": "BaseBdev2", 00:17:27.422 "uuid": "ea67289b-fb6d-4d2f-a228-28617b2c9fbe", 00:17:27.422 "is_configured": true, 00:17:27.422 "data_offset": 256, 00:17:27.422 "data_size": 7936 00:17:27.422 } 00:17:27.422 ] 00:17:27.422 } 00:17:27.422 } 00:17:27.422 }' 00:17:27.422 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:27.681 BaseBdev2' 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.681 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:27.682 
02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.682 [2024-10-13 02:31:46.277752] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.682 02:31:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.682 "name": "Existed_Raid", 00:17:27.682 "uuid": "96a11372-b447-4cd5-81a8-2e50db57cf0d", 00:17:27.682 "strip_size_kb": 0, 00:17:27.682 "state": "online", 00:17:27.682 "raid_level": "raid1", 00:17:27.682 "superblock": true, 00:17:27.682 "num_base_bdevs": 2, 00:17:27.682 "num_base_bdevs_discovered": 1, 00:17:27.682 "num_base_bdevs_operational": 1, 00:17:27.682 "base_bdevs_list": [ 00:17:27.682 { 00:17:27.682 "name": null, 00:17:27.682 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:27.682 "is_configured": false, 00:17:27.682 "data_offset": 0, 00:17:27.682 "data_size": 7936 00:17:27.682 }, 00:17:27.682 { 00:17:27.682 "name": "BaseBdev2", 00:17:27.682 "uuid": "ea67289b-fb6d-4d2f-a228-28617b2c9fbe", 00:17:27.682 "is_configured": true, 00:17:27.682 "data_offset": 256, 00:17:27.682 "data_size": 7936 00:17:27.682 } 00:17:27.682 ] 00:17:27.682 }' 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.682 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:28.250 02:31:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.250 [2024-10-13 02:31:46.816722] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.250 [2024-10-13 02:31:46.816944] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.250 [2024-10-13 02:31:46.828919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.250 [2024-10-13 02:31:46.828969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.250 [2024-10-13 02:31:46.828981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98723 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98723 ']' 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98723 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98723 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98723' 00:17:28.250 killing process with pid 98723 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98723 00:17:28.250 [2024-10-13 02:31:46.931511] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.250 02:31:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98723 00:17:28.509 [2024-10-13 02:31:46.932710] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.509 
02:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:28.509 00:17:28.509 real 0m4.098s 00:17:28.509 user 0m6.441s 00:17:28.509 sys 0m0.884s 00:17:28.509 02:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.509 02:31:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.509 ************************************ 00:17:28.509 END TEST raid_state_function_test_sb_md_interleaved 00:17:28.509 ************************************ 00:17:28.768 02:31:47 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:28.768 02:31:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:28.768 02:31:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.768 02:31:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.768 ************************************ 00:17:28.768 START TEST raid_superblock_test_md_interleaved 00:17:28.768 ************************************ 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=98965 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 98965 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98965 ']' 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.768 02:31:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:28.768 [2024-10-13 02:31:47.339608] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:28.768 [2024-10-13 02:31:47.339829] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98965 ] 00:17:29.027 [2024-10-13 02:31:47.471500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.027 [2024-10-13 02:31:47.522479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.027 [2024-10-13 02:31:47.564527] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.027 [2024-10-13 02:31:47.564653] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.595 malloc1 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.595 [2024-10-13 02:31:48.267035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:29.595 [2024-10-13 02:31:48.267153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.595 [2024-10-13 02:31:48.267198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:29.595 [2024-10-13 02:31:48.267238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.595 
[2024-10-13 02:31:48.269232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.595 [2024-10-13 02:31:48.269304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:29.595 pt1 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.595 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.854 malloc2 00:17:29.854 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.854 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.855 [2024-10-13 02:31:48.304079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.855 [2024-10-13 02:31:48.304207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.855 [2024-10-13 02:31:48.304252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:29.855 [2024-10-13 02:31:48.304289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.855 [2024-10-13 02:31:48.306369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.855 [2024-10-13 02:31:48.306440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.855 pt2 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.855 [2024-10-13 02:31:48.316085] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:29.855 [2024-10-13 02:31:48.318017] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.855 [2024-10-13 02:31:48.318252] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:17:29.855 [2024-10-13 02:31:48.318302] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:29.855 [2024-10-13 02:31:48.318417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:17:29.855 [2024-10-13 02:31:48.318522] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:17:29.855 [2024-10-13 02:31:48.318562] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:17:29.855 [2024-10-13 02:31:48.318689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.855 
02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.855 "name": "raid_bdev1", 00:17:29.855 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:29.855 "strip_size_kb": 0, 00:17:29.855 "state": "online", 00:17:29.855 "raid_level": "raid1", 00:17:29.855 "superblock": true, 00:17:29.855 "num_base_bdevs": 2, 00:17:29.855 "num_base_bdevs_discovered": 2, 00:17:29.855 "num_base_bdevs_operational": 2, 00:17:29.855 "base_bdevs_list": [ 00:17:29.855 { 00:17:29.855 "name": "pt1", 00:17:29.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.855 "is_configured": true, 00:17:29.855 "data_offset": 256, 00:17:29.855 "data_size": 7936 00:17:29.855 }, 00:17:29.855 { 00:17:29.855 "name": "pt2", 00:17:29.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.855 "is_configured": true, 00:17:29.855 "data_offset": 256, 00:17:29.855 "data_size": 7936 00:17:29.855 } 00:17:29.855 ] 00:17:29.855 }' 00:17:29.855 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.855 02:31:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:30.114 [2024-10-13 02:31:48.743730] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.114 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.114 "name": "raid_bdev1", 00:17:30.114 "aliases": [ 00:17:30.114 "d54c3be0-34ee-444a-951f-91f74f3470d2" 00:17:30.114 ], 00:17:30.114 "product_name": "Raid Volume", 00:17:30.114 "block_size": 4128, 00:17:30.114 "num_blocks": 7936, 00:17:30.114 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:30.114 "md_size": 32, 
00:17:30.114 "md_interleave": true, 00:17:30.114 "dif_type": 0, 00:17:30.114 "assigned_rate_limits": { 00:17:30.114 "rw_ios_per_sec": 0, 00:17:30.114 "rw_mbytes_per_sec": 0, 00:17:30.114 "r_mbytes_per_sec": 0, 00:17:30.114 "w_mbytes_per_sec": 0 00:17:30.114 }, 00:17:30.114 "claimed": false, 00:17:30.114 "zoned": false, 00:17:30.114 "supported_io_types": { 00:17:30.114 "read": true, 00:17:30.114 "write": true, 00:17:30.114 "unmap": false, 00:17:30.114 "flush": false, 00:17:30.114 "reset": true, 00:17:30.114 "nvme_admin": false, 00:17:30.114 "nvme_io": false, 00:17:30.114 "nvme_io_md": false, 00:17:30.114 "write_zeroes": true, 00:17:30.114 "zcopy": false, 00:17:30.114 "get_zone_info": false, 00:17:30.114 "zone_management": false, 00:17:30.114 "zone_append": false, 00:17:30.114 "compare": false, 00:17:30.114 "compare_and_write": false, 00:17:30.114 "abort": false, 00:17:30.114 "seek_hole": false, 00:17:30.114 "seek_data": false, 00:17:30.114 "copy": false, 00:17:30.114 "nvme_iov_md": false 00:17:30.114 }, 00:17:30.114 "memory_domains": [ 00:17:30.114 { 00:17:30.114 "dma_device_id": "system", 00:17:30.114 "dma_device_type": 1 00:17:30.114 }, 00:17:30.114 { 00:17:30.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.114 "dma_device_type": 2 00:17:30.114 }, 00:17:30.114 { 00:17:30.114 "dma_device_id": "system", 00:17:30.114 "dma_device_type": 1 00:17:30.114 }, 00:17:30.114 { 00:17:30.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.114 "dma_device_type": 2 00:17:30.114 } 00:17:30.114 ], 00:17:30.114 "driver_specific": { 00:17:30.114 "raid": { 00:17:30.114 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:30.115 "strip_size_kb": 0, 00:17:30.115 "state": "online", 00:17:30.115 "raid_level": "raid1", 00:17:30.115 "superblock": true, 00:17:30.115 "num_base_bdevs": 2, 00:17:30.115 "num_base_bdevs_discovered": 2, 00:17:30.115 "num_base_bdevs_operational": 2, 00:17:30.115 "base_bdevs_list": [ 00:17:30.115 { 00:17:30.115 "name": "pt1", 00:17:30.115 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:30.115 "is_configured": true, 00:17:30.115 "data_offset": 256, 00:17:30.115 "data_size": 7936 00:17:30.115 }, 00:17:30.115 { 00:17:30.115 "name": "pt2", 00:17:30.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.115 "is_configured": true, 00:17:30.115 "data_offset": 256, 00:17:30.115 "data_size": 7936 00:17:30.115 } 00:17:30.115 ] 00:17:30.115 } 00:17:30.115 } 00:17:30.115 }' 00:17:30.115 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:30.374 pt2' 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:30.374 02:31:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.374 02:31:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.374 [2024-10-13 02:31:48.983295] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d54c3be0-34ee-444a-951f-91f74f3470d2 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z d54c3be0-34ee-444a-951f-91f74f3470d2 ']' 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.374 [2024-10-13 02:31:49.030947] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.374 [2024-10-13 02:31:49.031039] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.374 [2024-10-13 02:31:49.031186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.374 [2024-10-13 02:31:49.031293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.374 [2024-10-13 02:31:49.031350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.374 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.633 02:31:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.634 02:31:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.634 [2024-10-13 02:31:49.158727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:30.634 [2024-10-13 02:31:49.160906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:30.634 [2024-10-13 02:31:49.161037] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:17:30.634 [2024-10-13 02:31:49.161131] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:30.634 [2024-10-13 02:31:49.161175] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.634 [2024-10-13 02:31:49.161217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:17:30.634 request: 00:17:30.634 { 00:17:30.634 "name": "raid_bdev1", 00:17:30.634 "raid_level": "raid1", 00:17:30.634 "base_bdevs": [ 00:17:30.634 "malloc1", 00:17:30.634 "malloc2" 00:17:30.634 ], 00:17:30.634 "superblock": false, 00:17:30.634 "method": "bdev_raid_create", 00:17:30.634 "req_id": 1 00:17:30.634 } 00:17:30.634 Got JSON-RPC error response 00:17:30.634 response: 00:17:30.634 { 00:17:30.634 "code": -17, 00:17:30.634 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:30.634 } 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.634 02:31:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.634 [2024-10-13 02:31:49.214583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:30.634 [2024-10-13 02:31:49.214703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.634 [2024-10-13 02:31:49.214742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:30.634 [2024-10-13 02:31:49.214770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.634 [2024-10-13 02:31:49.216869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.634 [2024-10-13 02:31:49.216961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:30.634 [2024-10-13 02:31:49.217073] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:30.634 [2024-10-13 02:31:49.217149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:30.634 pt1 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.634 02:31:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.634 
"name": "raid_bdev1", 00:17:30.634 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:30.634 "strip_size_kb": 0, 00:17:30.634 "state": "configuring", 00:17:30.634 "raid_level": "raid1", 00:17:30.634 "superblock": true, 00:17:30.634 "num_base_bdevs": 2, 00:17:30.634 "num_base_bdevs_discovered": 1, 00:17:30.634 "num_base_bdevs_operational": 2, 00:17:30.634 "base_bdevs_list": [ 00:17:30.634 { 00:17:30.634 "name": "pt1", 00:17:30.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.634 "is_configured": true, 00:17:30.634 "data_offset": 256, 00:17:30.634 "data_size": 7936 00:17:30.634 }, 00:17:30.634 { 00:17:30.634 "name": null, 00:17:30.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.634 "is_configured": false, 00:17:30.634 "data_offset": 256, 00:17:30.634 "data_size": 7936 00:17:30.634 } 00:17:30.634 ] 00:17:30.634 }' 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.634 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.201 [2024-10-13 02:31:49.641890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.201 [2024-10-13 02:31:49.642033] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.201 [2024-10-13 02:31:49.642074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:31.201 [2024-10-13 02:31:49.642102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.201 [2024-10-13 02:31:49.642337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.201 [2024-10-13 02:31:49.642390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.201 [2024-10-13 02:31:49.642477] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:31.201 [2024-10-13 02:31:49.642538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.201 [2024-10-13 02:31:49.642674] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:17:31.201 [2024-10-13 02:31:49.642717] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:31.201 [2024-10-13 02:31:49.642821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:31.201 [2024-10-13 02:31:49.642934] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:17:31.201 [2024-10-13 02:31:49.642980] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:17:31.201 [2024-10-13 02:31:49.643085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.201 pt2 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:31.201 02:31:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.201 "name": 
"raid_bdev1", 00:17:31.201 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:31.201 "strip_size_kb": 0, 00:17:31.201 "state": "online", 00:17:31.201 "raid_level": "raid1", 00:17:31.201 "superblock": true, 00:17:31.201 "num_base_bdevs": 2, 00:17:31.201 "num_base_bdevs_discovered": 2, 00:17:31.201 "num_base_bdevs_operational": 2, 00:17:31.201 "base_bdevs_list": [ 00:17:31.201 { 00:17:31.201 "name": "pt1", 00:17:31.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.201 "is_configured": true, 00:17:31.201 "data_offset": 256, 00:17:31.201 "data_size": 7936 00:17:31.201 }, 00:17:31.201 { 00:17:31.201 "name": "pt2", 00:17:31.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.201 "is_configured": true, 00:17:31.201 "data_offset": 256, 00:17:31.201 "data_size": 7936 00:17:31.201 } 00:17:31.201 ] 00:17:31.201 }' 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.201 02:31:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:31.460 02:31:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.460 [2024-10-13 02:31:50.109421] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.460 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:31.720 "name": "raid_bdev1", 00:17:31.720 "aliases": [ 00:17:31.720 "d54c3be0-34ee-444a-951f-91f74f3470d2" 00:17:31.720 ], 00:17:31.720 "product_name": "Raid Volume", 00:17:31.720 "block_size": 4128, 00:17:31.720 "num_blocks": 7936, 00:17:31.720 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:31.720 "md_size": 32, 00:17:31.720 "md_interleave": true, 00:17:31.720 "dif_type": 0, 00:17:31.720 "assigned_rate_limits": { 00:17:31.720 "rw_ios_per_sec": 0, 00:17:31.720 "rw_mbytes_per_sec": 0, 00:17:31.720 "r_mbytes_per_sec": 0, 00:17:31.720 "w_mbytes_per_sec": 0 00:17:31.720 }, 00:17:31.720 "claimed": false, 00:17:31.720 "zoned": false, 00:17:31.720 "supported_io_types": { 00:17:31.720 "read": true, 00:17:31.720 "write": true, 00:17:31.720 "unmap": false, 00:17:31.720 "flush": false, 00:17:31.720 "reset": true, 00:17:31.720 "nvme_admin": false, 00:17:31.720 "nvme_io": false, 00:17:31.720 "nvme_io_md": false, 00:17:31.720 "write_zeroes": true, 00:17:31.720 "zcopy": false, 00:17:31.720 "get_zone_info": false, 00:17:31.720 "zone_management": false, 00:17:31.720 "zone_append": false, 00:17:31.720 "compare": false, 00:17:31.720 "compare_and_write": false, 00:17:31.720 "abort": false, 00:17:31.720 "seek_hole": false, 00:17:31.720 "seek_data": false, 00:17:31.720 "copy": false, 00:17:31.720 "nvme_iov_md": 
false 00:17:31.720 }, 00:17:31.720 "memory_domains": [ 00:17:31.720 { 00:17:31.720 "dma_device_id": "system", 00:17:31.720 "dma_device_type": 1 00:17:31.720 }, 00:17:31.720 { 00:17:31.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.720 "dma_device_type": 2 00:17:31.720 }, 00:17:31.720 { 00:17:31.720 "dma_device_id": "system", 00:17:31.720 "dma_device_type": 1 00:17:31.720 }, 00:17:31.720 { 00:17:31.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.720 "dma_device_type": 2 00:17:31.720 } 00:17:31.720 ], 00:17:31.720 "driver_specific": { 00:17:31.720 "raid": { 00:17:31.720 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:31.720 "strip_size_kb": 0, 00:17:31.720 "state": "online", 00:17:31.720 "raid_level": "raid1", 00:17:31.720 "superblock": true, 00:17:31.720 "num_base_bdevs": 2, 00:17:31.720 "num_base_bdevs_discovered": 2, 00:17:31.720 "num_base_bdevs_operational": 2, 00:17:31.720 "base_bdevs_list": [ 00:17:31.720 { 00:17:31.720 "name": "pt1", 00:17:31.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.720 "is_configured": true, 00:17:31.720 "data_offset": 256, 00:17:31.720 "data_size": 7936 00:17:31.720 }, 00:17:31.720 { 00:17:31.720 "name": "pt2", 00:17:31.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.720 "is_configured": true, 00:17:31.720 "data_offset": 256, 00:17:31.720 "data_size": 7936 00:17:31.720 } 00:17:31.720 ] 00:17:31.720 } 00:17:31.720 } 00:17:31.720 }' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:31.720 pt2' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.720 [2024-10-13 02:31:50.365031] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.720 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' d54c3be0-34ee-444a-951f-91f74f3470d2 '!=' d54c3be0-34ee-444a-951f-91f74f3470d2 ']' 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.979 [2024-10-13 02:31:50.408752] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:31.979 "name": "raid_bdev1", 00:17:31.979 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:31.979 "strip_size_kb": 0, 00:17:31.979 "state": "online", 00:17:31.979 "raid_level": "raid1", 00:17:31.979 "superblock": true, 00:17:31.979 "num_base_bdevs": 2, 00:17:31.979 "num_base_bdevs_discovered": 1, 00:17:31.979 "num_base_bdevs_operational": 1, 00:17:31.979 "base_bdevs_list": [ 00:17:31.979 { 00:17:31.979 "name": null, 00:17:31.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.979 "is_configured": false, 00:17:31.979 "data_offset": 0, 00:17:31.979 "data_size": 7936 00:17:31.979 }, 00:17:31.979 { 00:17:31.979 "name": "pt2", 00:17:31.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.979 "is_configured": true, 00:17:31.979 "data_offset": 256, 00:17:31.979 "data_size": 7936 00:17:31.979 } 00:17:31.979 ] 00:17:31.979 }' 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.979 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.238 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.238 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.238 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.238 [2024-10-13 02:31:50.880011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.238 [2024-10-13 02:31:50.880046] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.238 [2024-10-13 02:31:50.880125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.238 [2024-10-13 02:31:50.880173] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:32.238 [2024-10-13 02:31:50.880182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:17:32.238 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.238 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.238 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.238 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:32.238 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.238 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.497 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:32.497 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:32.497 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:32.497 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:32.497 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:32.497 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.497 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.497 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.497 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.498 [2024-10-13 02:31:50.947864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:32.498 [2024-10-13 02:31:50.948026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.498 [2024-10-13 02:31:50.948068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:17:32.498 [2024-10-13 02:31:50.948096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.498 [2024-10-13 02:31:50.950117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.498 [2024-10-13 02:31:50.950188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:32.498 [2024-10-13 02:31:50.950272] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:32.498 [2024-10-13 02:31:50.950338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.498 [2024-10-13 02:31:50.950424] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:17:32.498 [2024-10-13 02:31:50.950449] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:17:32.498 [2024-10-13 02:31:50.950558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:17:32.498 [2024-10-13 02:31:50.950648] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:17:32.498 [2024-10-13 02:31:50.950686] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:17:32.498 [2024-10-13 02:31:50.950789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.498 pt2 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.498 02:31:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.498 "name": "raid_bdev1", 00:17:32.498 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:32.498 "strip_size_kb": 0, 00:17:32.498 "state": "online", 00:17:32.498 "raid_level": "raid1", 00:17:32.498 "superblock": true, 00:17:32.498 "num_base_bdevs": 2, 00:17:32.498 "num_base_bdevs_discovered": 1, 00:17:32.498 "num_base_bdevs_operational": 1, 00:17:32.498 "base_bdevs_list": [ 00:17:32.498 { 00:17:32.498 "name": null, 00:17:32.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.498 "is_configured": false, 00:17:32.498 "data_offset": 256, 00:17:32.498 "data_size": 7936 00:17:32.498 }, 00:17:32.498 { 00:17:32.498 "name": "pt2", 00:17:32.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.498 "is_configured": true, 00:17:32.498 "data_offset": 256, 00:17:32.498 "data_size": 7936 00:17:32.498 } 00:17:32.498 ] 00:17:32.498 }' 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.498 02:31:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.758 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.758 02:31:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.758 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.758 [2024-10-13 02:31:51.395087] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.758 [2024-10-13 02:31:51.395171] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.758 [2024-10-13 02:31:51.395281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.758 [2024-10-13 02:31:51.395370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.758 [2024-10-13 02:31:51.395421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:17:32.758 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.758 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:32.758 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.758 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.758 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.758 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.017 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:33.017 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:33.017 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:33.017 02:31:51 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.017 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.017 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.017 [2024-10-13 02:31:51.451043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.017 [2024-10-13 02:31:51.451174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.017 [2024-10-13 02:31:51.451214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:33.017 [2024-10-13 02:31:51.451246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.017 [2024-10-13 02:31:51.453380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.017 [2024-10-13 02:31:51.453473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.017 [2024-10-13 02:31:51.453558] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:33.017 [2024-10-13 02:31:51.453614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.017 [2024-10-13 02:31:51.453740] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:33.017 [2024-10-13 02:31:51.453797] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.017 pt1 00:17:33.017 [2024-10-13 02:31:51.453848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:17:33.017 [2024-10-13 02:31:51.453919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.017 [2024-10-13 02:31:51.453999] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000002380 00:17:33.017 [2024-10-13 02:31:51.454012] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:33.017 [2024-10-13 02:31:51.454113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:17:33.017 [2024-10-13 02:31:51.454177] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:17:33.017 [2024-10-13 02:31:51.454185] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:17:33.017 [2024-10-13 02:31:51.454260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.017 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.017 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:33.017 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.018 "name": "raid_bdev1", 00:17:33.018 "uuid": "d54c3be0-34ee-444a-951f-91f74f3470d2", 00:17:33.018 "strip_size_kb": 0, 00:17:33.018 "state": "online", 00:17:33.018 "raid_level": "raid1", 00:17:33.018 "superblock": true, 00:17:33.018 "num_base_bdevs": 2, 00:17:33.018 "num_base_bdevs_discovered": 1, 00:17:33.018 "num_base_bdevs_operational": 1, 00:17:33.018 "base_bdevs_list": [ 00:17:33.018 { 00:17:33.018 "name": null, 00:17:33.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.018 "is_configured": false, 00:17:33.018 "data_offset": 256, 00:17:33.018 "data_size": 7936 00:17:33.018 }, 00:17:33.018 { 00:17:33.018 "name": "pt2", 00:17:33.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.018 "is_configured": true, 00:17:33.018 "data_offset": 256, 00:17:33.018 "data_size": 7936 00:17:33.018 } 00:17:33.018 ] 00:17:33.018 }' 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.018 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.277 [2024-10-13 02:31:51.946422] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.277 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.536 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' d54c3be0-34ee-444a-951f-91f74f3470d2 '!=' d54c3be0-34ee-444a-951f-91f74f3470d2 ']' 00:17:33.536 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 98965 00:17:33.536 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98965 ']' 00:17:33.536 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@954 -- # kill -0 98965 00:17:33.536 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:33.536 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.536 02:31:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98965 00:17:33.536 killing process with pid 98965 00:17:33.536 02:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:33.536 02:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:33.536 02:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98965' 00:17:33.536 02:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 98965 00:17:33.536 [2024-10-13 02:31:52.005182] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.536 [2024-10-13 02:31:52.005294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.536 02:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 98965 00:17:33.536 [2024-10-13 02:31:52.005348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.536 [2024-10-13 02:31:52.005358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:17:33.536 [2024-10-13 02:31:52.029209] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.795 02:31:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:33.795 00:17:33.795 real 0m5.005s 00:17:33.795 user 0m8.154s 00:17:33.795 sys 0m1.079s 00:17:33.795 02:31:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:33.795 02:31:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.795 ************************************ 00:17:33.795 END TEST raid_superblock_test_md_interleaved 00:17:33.795 ************************************ 00:17:33.796 02:31:52 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:33.796 02:31:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:33.796 02:31:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:33.796 02:31:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.796 ************************************ 00:17:33.796 START TEST raid_rebuild_test_sb_md_interleaved 00:17:33.796 ************************************ 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=99277 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99277 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99277 ']' 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.796 02:31:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.796 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:33.796 Zero copy mechanism will not be used. 00:17:33.796 [2024-10-13 02:31:52.433438] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:33.796 [2024-10-13 02:31:52.433598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99277 ] 00:17:34.055 [2024-10-13 02:31:52.579352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.055 [2024-10-13 02:31:52.630955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.055 [2024-10-13 02:31:52.673545] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.055 [2024-10-13 02:31:52.673579] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.622 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.622 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:17:34.622 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:34.622 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:34.622 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.622 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.622 BaseBdev1_malloc 00:17:34.623 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.623 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:34.623 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.623 02:31:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.623 [2024-10-13 02:31:53.288255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:34.623 [2024-10-13 02:31:53.288405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.623 [2024-10-13 02:31:53.288458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:34.623 [2024-10-13 02:31:53.288490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.623 [2024-10-13 02:31:53.290511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.623 [2024-10-13 02:31:53.290585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:34.623 BaseBdev1 00:17:34.623 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.623 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:34.623 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:34.623 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.623 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.882 BaseBdev2_malloc 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.882 [2024-10-13 02:31:53.328343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:34.882 [2024-10-13 02:31:53.328425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.882 [2024-10-13 02:31:53.328457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:34.882 [2024-10-13 02:31:53.328470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.882 [2024-10-13 02:31:53.331158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.882 [2024-10-13 02:31:53.331231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:34.882 BaseBdev2 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.882 spare_malloc 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.882 spare_delay 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.882 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.882 [2024-10-13 02:31:53.369128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.882 [2024-10-13 02:31:53.369242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.882 [2024-10-13 02:31:53.369286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:34.882 [2024-10-13 02:31:53.369314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.882 [2024-10-13 02:31:53.371279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.883 [2024-10-13 02:31:53.371345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.883 spare 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 [2024-10-13 02:31:53.381172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.883 [2024-10-13 02:31:53.383041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.883 [2024-10-13 
02:31:53.383218] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:17:34.883 [2024-10-13 02:31:53.383232] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:34.883 [2024-10-13 02:31:53.383327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:34.883 [2024-10-13 02:31:53.383398] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:17:34.883 [2024-10-13 02:31:53.383411] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:17:34.883 [2024-10-13 02:31:53.383494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.883 "name": "raid_bdev1", 00:17:34.883 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:34.883 "strip_size_kb": 0, 00:17:34.883 "state": "online", 00:17:34.883 "raid_level": "raid1", 00:17:34.883 "superblock": true, 00:17:34.883 "num_base_bdevs": 2, 00:17:34.883 "num_base_bdevs_discovered": 2, 00:17:34.883 "num_base_bdevs_operational": 2, 00:17:34.883 "base_bdevs_list": [ 00:17:34.883 { 00:17:34.883 "name": "BaseBdev1", 00:17:34.883 "uuid": "8d81c9d5-b66d-585a-8306-0c415d3c1193", 00:17:34.883 "is_configured": true, 00:17:34.883 "data_offset": 256, 00:17:34.883 "data_size": 7936 00:17:34.883 }, 00:17:34.883 { 00:17:34.883 "name": "BaseBdev2", 00:17:34.883 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:34.883 "is_configured": true, 00:17:34.883 "data_offset": 256, 00:17:34.883 "data_size": 7936 00:17:34.883 } 00:17:34.883 ] 00:17:34.883 }' 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.883 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.451 02:31:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.451 [2024-10-13 02:31:53.860609] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:35.451 02:31:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.451 [2024-10-13 02:31:53.952178] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.451 02:31:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.451 02:31:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.451 02:31:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.451 "name": "raid_bdev1", 00:17:35.451 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:35.451 "strip_size_kb": 0, 00:17:35.451 "state": "online", 00:17:35.451 "raid_level": "raid1", 00:17:35.451 "superblock": true, 00:17:35.451 "num_base_bdevs": 2, 00:17:35.451 "num_base_bdevs_discovered": 1, 00:17:35.451 "num_base_bdevs_operational": 1, 00:17:35.451 "base_bdevs_list": [ 00:17:35.451 { 00:17:35.451 "name": null, 00:17:35.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.451 "is_configured": false, 00:17:35.451 "data_offset": 0, 00:17:35.451 "data_size": 7936 00:17:35.451 }, 00:17:35.451 { 00:17:35.451 "name": "BaseBdev2", 00:17:35.451 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:35.451 "is_configured": true, 00:17:35.451 "data_offset": 256, 00:17:35.451 "data_size": 7936 00:17:35.451 } 00:17:35.451 ] 00:17:35.452 }' 00:17:35.452 02:31:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.452 02:31:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.020 02:31:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.020 02:31:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.020 02:31:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.020 [2024-10-13 02:31:54.415427] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.020 [2024-10-13 02:31:54.418508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:17:36.020 [2024-10-13 02:31:54.420535] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.020 02:31:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.020 02:31:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.971 "name": "raid_bdev1", 00:17:36.971 
"uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:36.971 "strip_size_kb": 0, 00:17:36.971 "state": "online", 00:17:36.971 "raid_level": "raid1", 00:17:36.971 "superblock": true, 00:17:36.971 "num_base_bdevs": 2, 00:17:36.971 "num_base_bdevs_discovered": 2, 00:17:36.971 "num_base_bdevs_operational": 2, 00:17:36.971 "process": { 00:17:36.971 "type": "rebuild", 00:17:36.971 "target": "spare", 00:17:36.971 "progress": { 00:17:36.971 "blocks": 2560, 00:17:36.971 "percent": 32 00:17:36.971 } 00:17:36.971 }, 00:17:36.971 "base_bdevs_list": [ 00:17:36.971 { 00:17:36.971 "name": "spare", 00:17:36.971 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:36.971 "is_configured": true, 00:17:36.971 "data_offset": 256, 00:17:36.971 "data_size": 7936 00:17:36.971 }, 00:17:36.971 { 00:17:36.971 "name": "BaseBdev2", 00:17:36.971 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:36.971 "is_configured": true, 00:17:36.971 "data_offset": 256, 00:17:36.971 "data_size": 7936 00:17:36.971 } 00:17:36.971 ] 00:17:36.971 }' 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.971 [2024-10-13 02:31:55.583787] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:36.971 [2024-10-13 02:31:55.626586] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.971 [2024-10-13 02:31:55.626766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.971 [2024-10-13 02:31:55.626806] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.971 [2024-10-13 02:31:55.626829] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.971 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.230 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.230 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.230 "name": "raid_bdev1", 00:17:37.230 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:37.230 "strip_size_kb": 0, 00:17:37.230 "state": "online", 00:17:37.230 "raid_level": "raid1", 00:17:37.230 "superblock": true, 00:17:37.230 "num_base_bdevs": 2, 00:17:37.230 "num_base_bdevs_discovered": 1, 00:17:37.230 "num_base_bdevs_operational": 1, 00:17:37.230 "base_bdevs_list": [ 00:17:37.230 { 00:17:37.230 "name": null, 00:17:37.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.230 "is_configured": false, 00:17:37.230 "data_offset": 0, 00:17:37.230 "data_size": 7936 00:17:37.230 }, 00:17:37.230 { 00:17:37.230 "name": "BaseBdev2", 00:17:37.230 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:37.230 "is_configured": true, 00:17:37.230 "data_offset": 256, 00:17:37.230 "data_size": 7936 00:17:37.230 } 00:17:37.230 ] 00:17:37.230 }' 00:17:37.230 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.230 02:31:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.489 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.489 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:37.489 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.489 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.489 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.489 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.490 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.490 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.490 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.490 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.490 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.490 "name": "raid_bdev1", 00:17:37.490 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:37.490 "strip_size_kb": 0, 00:17:37.490 "state": "online", 00:17:37.490 "raid_level": "raid1", 00:17:37.490 "superblock": true, 00:17:37.490 "num_base_bdevs": 2, 00:17:37.490 "num_base_bdevs_discovered": 1, 00:17:37.490 "num_base_bdevs_operational": 1, 00:17:37.490 "base_bdevs_list": [ 00:17:37.490 { 00:17:37.490 "name": null, 00:17:37.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.490 "is_configured": false, 00:17:37.490 "data_offset": 0, 00:17:37.490 "data_size": 7936 00:17:37.490 }, 00:17:37.490 { 00:17:37.490 "name": "BaseBdev2", 00:17:37.490 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:37.490 "is_configured": true, 00:17:37.490 "data_offset": 256, 00:17:37.490 "data_size": 7936 00:17:37.490 } 00:17:37.490 ] 00:17:37.490 }' 
00:17:37.490 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.749 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.749 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.749 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:37.749 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:37.749 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.749 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.749 [2024-10-13 02:31:56.229800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.749 [2024-10-13 02:31:56.232842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:17:37.749 [2024-10-13 02:31:56.234844] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.749 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.749 02:31:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.685 "name": "raid_bdev1", 00:17:38.685 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:38.685 "strip_size_kb": 0, 00:17:38.685 "state": "online", 00:17:38.685 "raid_level": "raid1", 00:17:38.685 "superblock": true, 00:17:38.685 "num_base_bdevs": 2, 00:17:38.685 "num_base_bdevs_discovered": 2, 00:17:38.685 "num_base_bdevs_operational": 2, 00:17:38.685 "process": { 00:17:38.685 "type": "rebuild", 00:17:38.685 "target": "spare", 00:17:38.685 "progress": { 00:17:38.685 "blocks": 2560, 00:17:38.685 "percent": 32 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 "base_bdevs_list": [ 00:17:38.685 { 00:17:38.685 "name": "spare", 00:17:38.685 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:38.685 "is_configured": true, 00:17:38.685 "data_offset": 256, 00:17:38.685 "data_size": 7936 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "name": "BaseBdev2", 00:17:38.685 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:38.685 "is_configured": true, 00:17:38.685 "data_offset": 256, 00:17:38.685 "data_size": 7936 00:17:38.685 } 00:17:38.685 ] 00:17:38.685 }' 00:17:38.685 02:31:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.685 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:38.944 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=625 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.944 02:31:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.944 "name": "raid_bdev1", 00:17:38.944 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:38.944 "strip_size_kb": 0, 00:17:38.944 "state": "online", 00:17:38.944 "raid_level": "raid1", 00:17:38.944 "superblock": true, 00:17:38.944 "num_base_bdevs": 2, 00:17:38.944 "num_base_bdevs_discovered": 2, 00:17:38.944 "num_base_bdevs_operational": 2, 00:17:38.944 "process": { 00:17:38.944 "type": "rebuild", 00:17:38.944 "target": "spare", 00:17:38.944 "progress": { 00:17:38.944 "blocks": 2816, 00:17:38.944 "percent": 35 00:17:38.944 } 00:17:38.944 }, 00:17:38.944 "base_bdevs_list": [ 00:17:38.944 { 00:17:38.944 "name": "spare", 00:17:38.944 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:38.944 "is_configured": true, 00:17:38.944 "data_offset": 256, 00:17:38.944 "data_size": 7936 00:17:38.944 }, 00:17:38.944 { 00:17:38.944 "name": "BaseBdev2", 00:17:38.944 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:38.944 "is_configured": true, 00:17:38.944 "data_offset": 256, 00:17:38.944 "data_size": 7936 00:17:38.944 } 00:17:38.944 ] 00:17:38.944 }' 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.944 02:31:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.880 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.880 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.880 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.880 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.880 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.880 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.880 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.881 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.881 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.881 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.881 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.881 02:31:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.881 "name": "raid_bdev1", 00:17:39.881 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:39.881 "strip_size_kb": 0, 00:17:39.881 "state": "online", 00:17:39.881 "raid_level": "raid1", 00:17:39.881 "superblock": true, 00:17:39.881 "num_base_bdevs": 2, 00:17:39.881 "num_base_bdevs_discovered": 2, 00:17:39.881 "num_base_bdevs_operational": 2, 00:17:39.881 "process": { 00:17:39.881 "type": "rebuild", 00:17:39.881 "target": "spare", 00:17:39.881 "progress": { 00:17:39.881 "blocks": 5632, 00:17:39.881 "percent": 70 00:17:39.881 } 00:17:39.881 }, 00:17:39.881 "base_bdevs_list": [ 00:17:39.881 { 00:17:39.881 "name": "spare", 00:17:39.881 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:39.881 "is_configured": true, 00:17:39.881 "data_offset": 256, 00:17:39.881 "data_size": 7936 00:17:39.881 }, 00:17:39.881 { 00:17:39.881 "name": "BaseBdev2", 00:17:39.881 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:39.881 "is_configured": true, 00:17:39.881 "data_offset": 256, 00:17:39.881 "data_size": 7936 00:17:39.881 } 00:17:39.881 ] 00:17:39.881 }' 00:17:39.881 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.139 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.139 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.139 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.139 02:31:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.707 [2024-10-13 02:31:59.348913] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:40.707 [2024-10-13 02:31:59.349016] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:40.707 [2024-10-13 02:31:59.349169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.966 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.225 "name": "raid_bdev1", 00:17:41.225 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:41.225 "strip_size_kb": 0, 00:17:41.225 "state": "online", 00:17:41.225 "raid_level": "raid1", 00:17:41.225 "superblock": true, 00:17:41.225 "num_base_bdevs": 2, 00:17:41.225 
"num_base_bdevs_discovered": 2, 00:17:41.225 "num_base_bdevs_operational": 2, 00:17:41.225 "base_bdevs_list": [ 00:17:41.225 { 00:17:41.225 "name": "spare", 00:17:41.225 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:41.225 "is_configured": true, 00:17:41.225 "data_offset": 256, 00:17:41.225 "data_size": 7936 00:17:41.225 }, 00:17:41.225 { 00:17:41.225 "name": "BaseBdev2", 00:17:41.225 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:41.225 "is_configured": true, 00:17:41.225 "data_offset": 256, 00:17:41.225 "data_size": 7936 00:17:41.225 } 00:17:41.225 ] 00:17:41.225 }' 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.225 02:31:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.225 "name": "raid_bdev1", 00:17:41.225 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:41.225 "strip_size_kb": 0, 00:17:41.225 "state": "online", 00:17:41.225 "raid_level": "raid1", 00:17:41.225 "superblock": true, 00:17:41.225 "num_base_bdevs": 2, 00:17:41.225 "num_base_bdevs_discovered": 2, 00:17:41.225 "num_base_bdevs_operational": 2, 00:17:41.225 "base_bdevs_list": [ 00:17:41.225 { 00:17:41.225 "name": "spare", 00:17:41.225 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:41.225 "is_configured": true, 00:17:41.225 "data_offset": 256, 00:17:41.225 "data_size": 7936 00:17:41.225 }, 00:17:41.225 { 00:17:41.225 "name": "BaseBdev2", 00:17:41.225 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:41.225 "is_configured": true, 00:17:41.225 "data_offset": 256, 00:17:41.225 "data_size": 7936 00:17:41.225 } 00:17:41.225 ] 00:17:41.225 }' 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.225 02:31:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.225 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.226 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.484 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.484 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.484 "name": 
"raid_bdev1", 00:17:41.484 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:41.484 "strip_size_kb": 0, 00:17:41.484 "state": "online", 00:17:41.484 "raid_level": "raid1", 00:17:41.484 "superblock": true, 00:17:41.484 "num_base_bdevs": 2, 00:17:41.484 "num_base_bdevs_discovered": 2, 00:17:41.484 "num_base_bdevs_operational": 2, 00:17:41.484 "base_bdevs_list": [ 00:17:41.484 { 00:17:41.484 "name": "spare", 00:17:41.484 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:41.484 "is_configured": true, 00:17:41.484 "data_offset": 256, 00:17:41.484 "data_size": 7936 00:17:41.484 }, 00:17:41.484 { 00:17:41.484 "name": "BaseBdev2", 00:17:41.484 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:41.484 "is_configured": true, 00:17:41.484 "data_offset": 256, 00:17:41.484 "data_size": 7936 00:17:41.484 } 00:17:41.484 ] 00:17:41.484 }' 00:17:41.484 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.484 02:31:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.743 [2024-10-13 02:32:00.319343] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.743 [2024-10-13 02:32:00.319433] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.743 [2024-10-13 02:32:00.319561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.743 [2024-10-13 02:32:00.319668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.743 [2024-10-13 
02:32:00.319723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.743 02:32:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.743 [2024-10-13 02:32:00.395191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:41.743 [2024-10-13 02:32:00.395324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.743 [2024-10-13 02:32:00.395365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:41.743 [2024-10-13 02:32:00.395423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.743 [2024-10-13 02:32:00.397424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.743 [2024-10-13 02:32:00.397505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:41.743 [2024-10-13 02:32:00.397589] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:41.743 [2024-10-13 02:32:00.397684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:41.743 [2024-10-13 02:32:00.397807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:41.743 spare 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.743 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.002 [2024-10-13 02:32:00.497781] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:17:42.002 [2024-10-13 02:32:00.497929] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:42.002 [2024-10-13 02:32:00.498131] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:42.002 [2024-10-13 02:32:00.498296] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:17:42.002 [2024-10-13 02:32:00.498336] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:17:42.002 [2024-10-13 02:32:00.498475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.002 02:32:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.002 "name": "raid_bdev1", 00:17:42.002 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:42.002 "strip_size_kb": 0, 00:17:42.002 "state": "online", 00:17:42.002 "raid_level": "raid1", 00:17:42.002 "superblock": true, 00:17:42.002 "num_base_bdevs": 2, 00:17:42.002 "num_base_bdevs_discovered": 2, 00:17:42.002 "num_base_bdevs_operational": 2, 00:17:42.002 "base_bdevs_list": [ 00:17:42.002 { 00:17:42.002 "name": "spare", 00:17:42.002 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:42.002 "is_configured": true, 00:17:42.002 "data_offset": 256, 00:17:42.002 "data_size": 7936 00:17:42.002 }, 00:17:42.002 { 00:17:42.002 "name": "BaseBdev2", 00:17:42.002 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:42.002 "is_configured": true, 00:17:42.002 "data_offset": 256, 00:17:42.002 "data_size": 7936 00:17:42.002 } 00:17:42.002 ] 00:17:42.002 }' 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.002 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.261 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.261 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.261 02:32:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.261 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.261 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.261 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.261 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.261 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.261 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.520 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.520 02:32:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.520 "name": "raid_bdev1", 00:17:42.520 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:42.520 "strip_size_kb": 0, 00:17:42.520 "state": "online", 00:17:42.520 "raid_level": "raid1", 00:17:42.520 "superblock": true, 00:17:42.520 "num_base_bdevs": 2, 00:17:42.520 "num_base_bdevs_discovered": 2, 00:17:42.520 "num_base_bdevs_operational": 2, 00:17:42.520 "base_bdevs_list": [ 00:17:42.520 { 00:17:42.520 "name": "spare", 00:17:42.520 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:42.520 "is_configured": true, 00:17:42.520 "data_offset": 256, 00:17:42.520 "data_size": 7936 00:17:42.520 }, 00:17:42.520 { 00:17:42.520 "name": "BaseBdev2", 00:17:42.520 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:42.520 "is_configured": true, 00:17:42.520 "data_offset": 256, 00:17:42.520 "data_size": 7936 00:17:42.520 } 00:17:42.520 ] 00:17:42.520 }' 00:17:42.520 02:32:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.520 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.520 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.520 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.520 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.520 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.520 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.520 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:42.520 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.520 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.521 [2024-10-13 02:32:01.110088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.521 02:32:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.521 "name": "raid_bdev1", 00:17:42.521 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:42.521 "strip_size_kb": 0, 00:17:42.521 "state": "online", 00:17:42.521 
"raid_level": "raid1", 00:17:42.521 "superblock": true, 00:17:42.521 "num_base_bdevs": 2, 00:17:42.521 "num_base_bdevs_discovered": 1, 00:17:42.521 "num_base_bdevs_operational": 1, 00:17:42.521 "base_bdevs_list": [ 00:17:42.521 { 00:17:42.521 "name": null, 00:17:42.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.521 "is_configured": false, 00:17:42.521 "data_offset": 0, 00:17:42.521 "data_size": 7936 00:17:42.521 }, 00:17:42.521 { 00:17:42.521 "name": "BaseBdev2", 00:17:42.521 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:42.521 "is_configured": true, 00:17:42.521 "data_offset": 256, 00:17:42.521 "data_size": 7936 00:17:42.521 } 00:17:42.521 ] 00:17:42.521 }' 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.521 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.099 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:43.099 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.099 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.099 [2024-10-13 02:32:01.529406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.099 [2024-10-13 02:32:01.529691] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:43.099 [2024-10-13 02:32:01.529752] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:43.099 [2024-10-13 02:32:01.529827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.099 [2024-10-13 02:32:01.532665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:17:43.099 [2024-10-13 02:32:01.534675] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.099 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.099 02:32:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:44.037 "name": "raid_bdev1", 00:17:44.037 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:44.037 "strip_size_kb": 0, 00:17:44.037 "state": "online", 00:17:44.037 "raid_level": "raid1", 00:17:44.037 "superblock": true, 00:17:44.037 "num_base_bdevs": 2, 00:17:44.037 "num_base_bdevs_discovered": 2, 00:17:44.037 "num_base_bdevs_operational": 2, 00:17:44.037 "process": { 00:17:44.037 "type": "rebuild", 00:17:44.037 "target": "spare", 00:17:44.037 "progress": { 00:17:44.037 "blocks": 2560, 00:17:44.037 "percent": 32 00:17:44.037 } 00:17:44.037 }, 00:17:44.037 "base_bdevs_list": [ 00:17:44.037 { 00:17:44.037 "name": "spare", 00:17:44.037 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:44.037 "is_configured": true, 00:17:44.037 "data_offset": 256, 00:17:44.037 "data_size": 7936 00:17:44.037 }, 00:17:44.037 { 00:17:44.037 "name": "BaseBdev2", 00:17:44.037 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:44.037 "is_configured": true, 00:17:44.037 "data_offset": 256, 00:17:44.037 "data_size": 7936 00:17:44.037 } 00:17:44.037 ] 00:17:44.037 }' 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.037 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.037 [2024-10-13 02:32:02.681924] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.296 [2024-10-13 02:32:02.739879] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:44.296 [2024-10-13 02:32:02.740066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.296 [2024-10-13 02:32:02.740106] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.296 [2024-10-13 02:32:02.740127] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.296 02:32:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.296 "name": "raid_bdev1", 00:17:44.296 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:44.296 "strip_size_kb": 0, 00:17:44.296 "state": "online", 00:17:44.296 "raid_level": "raid1", 00:17:44.296 "superblock": true, 00:17:44.296 "num_base_bdevs": 2, 00:17:44.296 "num_base_bdevs_discovered": 1, 00:17:44.296 "num_base_bdevs_operational": 1, 00:17:44.296 "base_bdevs_list": [ 00:17:44.296 { 00:17:44.296 "name": null, 00:17:44.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.296 "is_configured": false, 00:17:44.296 "data_offset": 0, 00:17:44.296 "data_size": 7936 00:17:44.296 }, 00:17:44.296 { 00:17:44.296 "name": "BaseBdev2", 00:17:44.296 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:44.296 "is_configured": true, 00:17:44.296 "data_offset": 256, 00:17:44.296 "data_size": 7936 00:17:44.296 } 00:17:44.296 ] 00:17:44.296 }' 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.296 02:32:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.555 02:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.555 02:32:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.555 02:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.555 [2024-10-13 02:32:03.235040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.555 [2024-10-13 02:32:03.235177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.555 [2024-10-13 02:32:03.235223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:44.555 [2024-10-13 02:32:03.235250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.555 [2024-10-13 02:32:03.235481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.555 [2024-10-13 02:32:03.235527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.555 [2024-10-13 02:32:03.235614] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:44.555 [2024-10-13 02:32:03.235652] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.555 [2024-10-13 02:32:03.235694] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
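After each `bdev_raid_remove_base_bdev spare`, the trace calls `verify_raid_bdev_state raid_bdev1 online raid1 0 1` to confirm the RAID1 array stays online in degraded mode with one operational member. A standalone sketch of that check follows, again with a hand-written JSON sample that mirrors the degraded `bdev_raid_get_bdevs` output shown in this log (the real helper reads it over JSON-RPC):

```shell
#!/usr/bin/env bash
# Sketch of the degraded-state check: after removing "spare", the emptied slot
# reports name null / is_configured false, but RAID1 remains online on the
# surviving base bdev, so discovered and operational both drop to 1.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    { "name": null, "is_configured": false },
    { "name": "BaseBdev2", "is_configured": true }
  ]
}'

state=$(printf '%s' "$raid_bdev_info" | jq -r '.state')
raid_level=$(printf '%s' "$raid_bdev_info" | jq -r '.raid_level')
discovered=$(printf '%s' "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
operational=$(printf '%s' "$raid_bdev_info" | jq -r '.num_base_bdevs_operational')

echo "$state $raid_level $discovered/$operational"
```

With this input the script prints `online raid1 1/1`, the condition the trace asserts before re-adding the spare and waiting for the next rebuild pass.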
00:17:44.555 [2024-10-13 02:32:03.235736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.814 [2024-10-13 02:32:03.238590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:17:44.814 [2024-10-13 02:32:03.240582] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.814 spare 00:17:44.814 02:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.814 02:32:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:45.751 "name": "raid_bdev1", 00:17:45.751 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:45.751 "strip_size_kb": 0, 00:17:45.751 "state": "online", 00:17:45.751 "raid_level": "raid1", 00:17:45.751 "superblock": true, 00:17:45.751 "num_base_bdevs": 2, 00:17:45.751 "num_base_bdevs_discovered": 2, 00:17:45.751 "num_base_bdevs_operational": 2, 00:17:45.751 "process": { 00:17:45.751 "type": "rebuild", 00:17:45.751 "target": "spare", 00:17:45.751 "progress": { 00:17:45.751 "blocks": 2560, 00:17:45.751 "percent": 32 00:17:45.751 } 00:17:45.751 }, 00:17:45.751 "base_bdevs_list": [ 00:17:45.751 { 00:17:45.751 "name": "spare", 00:17:45.751 "uuid": "5f0c76db-ac5f-5be3-8ca0-196bdf2ed3cb", 00:17:45.751 "is_configured": true, 00:17:45.751 "data_offset": 256, 00:17:45.751 "data_size": 7936 00:17:45.751 }, 00:17:45.751 { 00:17:45.751 "name": "BaseBdev2", 00:17:45.751 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:45.751 "is_configured": true, 00:17:45.751 "data_offset": 256, 00:17:45.751 "data_size": 7936 00:17:45.751 } 00:17:45.751 ] 00:17:45.751 }' 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.751 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.751 [2024-10-13 
02:32:04.379997] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.010 [2024-10-13 02:32:04.445732] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:46.010 [2024-10-13 02:32:04.445922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.010 [2024-10-13 02:32:04.445940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:46.010 [2024-10-13 02:32:04.445951] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.010 02:32:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.010 "name": "raid_bdev1", 00:17:46.010 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:46.010 "strip_size_kb": 0, 00:17:46.010 "state": "online", 00:17:46.010 "raid_level": "raid1", 00:17:46.010 "superblock": true, 00:17:46.010 "num_base_bdevs": 2, 00:17:46.010 "num_base_bdevs_discovered": 1, 00:17:46.010 "num_base_bdevs_operational": 1, 00:17:46.010 "base_bdevs_list": [ 00:17:46.010 { 00:17:46.010 "name": null, 00:17:46.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.010 "is_configured": false, 00:17:46.010 "data_offset": 0, 00:17:46.010 "data_size": 7936 00:17:46.010 }, 00:17:46.010 { 00:17:46.010 "name": "BaseBdev2", 00:17:46.010 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:46.010 "is_configured": true, 00:17:46.010 "data_offset": 256, 00:17:46.010 "data_size": 7936 00:17:46.010 } 00:17:46.010 ] 00:17:46.010 }' 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.010 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.269 02:32:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.269 "name": "raid_bdev1", 00:17:46.269 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:46.269 "strip_size_kb": 0, 00:17:46.269 "state": "online", 00:17:46.269 "raid_level": "raid1", 00:17:46.269 "superblock": true, 00:17:46.269 "num_base_bdevs": 2, 00:17:46.269 "num_base_bdevs_discovered": 1, 00:17:46.269 "num_base_bdevs_operational": 1, 00:17:46.269 "base_bdevs_list": [ 00:17:46.269 { 00:17:46.269 "name": null, 00:17:46.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.269 "is_configured": false, 00:17:46.269 "data_offset": 0, 00:17:46.269 "data_size": 7936 00:17:46.269 }, 00:17:46.269 { 00:17:46.269 "name": "BaseBdev2", 00:17:46.269 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:46.269 "is_configured": true, 00:17:46.269 "data_offset": 256, 
00:17:46.269 "data_size": 7936 00:17:46.269 } 00:17:46.269 ] 00:17:46.269 }' 00:17:46.269 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.528 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.528 02:32:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.528 [2024-10-13 02:32:05.032824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:46.528 [2024-10-13 02:32:05.032983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.528 [2024-10-13 02:32:05.033013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:46.528 [2024-10-13 02:32:05.033025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.528 [2024-10-13 02:32:05.033215] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.528 [2024-10-13 02:32:05.033231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:46.528 [2024-10-13 02:32:05.033290] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:46.528 [2024-10-13 02:32:05.033306] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:46.528 [2024-10-13 02:32:05.033314] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:46.528 [2024-10-13 02:32:05.033328] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:46.528 BaseBdev1 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.528 02:32:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.466 02:32:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.466 "name": "raid_bdev1", 00:17:47.466 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:47.466 "strip_size_kb": 0, 00:17:47.466 "state": "online", 00:17:47.466 "raid_level": "raid1", 00:17:47.466 "superblock": true, 00:17:47.466 "num_base_bdevs": 2, 00:17:47.466 "num_base_bdevs_discovered": 1, 00:17:47.466 "num_base_bdevs_operational": 1, 00:17:47.466 "base_bdevs_list": [ 00:17:47.466 { 00:17:47.466 "name": null, 00:17:47.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.466 "is_configured": false, 00:17:47.466 "data_offset": 0, 00:17:47.466 "data_size": 7936 00:17:47.466 }, 00:17:47.466 { 00:17:47.466 "name": "BaseBdev2", 00:17:47.466 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:47.466 "is_configured": true, 00:17:47.466 "data_offset": 256, 00:17:47.466 "data_size": 7936 00:17:47.466 } 00:17:47.466 ] 00:17:47.466 }' 00:17:47.466 02:32:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.466 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.034 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.034 "name": "raid_bdev1", 00:17:48.034 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:48.034 "strip_size_kb": 0, 00:17:48.034 "state": "online", 00:17:48.034 "raid_level": "raid1", 00:17:48.034 "superblock": true, 00:17:48.034 "num_base_bdevs": 2, 00:17:48.034 "num_base_bdevs_discovered": 1, 00:17:48.034 "num_base_bdevs_operational": 1, 00:17:48.035 "base_bdevs_list": [ 00:17:48.035 { 00:17:48.035 "name": 
null, 00:17:48.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.035 "is_configured": false, 00:17:48.035 "data_offset": 0, 00:17:48.035 "data_size": 7936 00:17:48.035 }, 00:17:48.035 { 00:17:48.035 "name": "BaseBdev2", 00:17:48.035 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:48.035 "is_configured": true, 00:17:48.035 "data_offset": 256, 00:17:48.035 "data_size": 7936 00:17:48.035 } 00:17:48.035 ] 00:17:48.035 }' 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.035 [2024-10-13 02:32:06.678049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.035 [2024-10-13 02:32:06.678338] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.035 [2024-10-13 02:32:06.678397] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:48.035 request: 00:17:48.035 { 00:17:48.035 "base_bdev": "BaseBdev1", 00:17:48.035 "raid_bdev": "raid_bdev1", 00:17:48.035 "method": "bdev_raid_add_base_bdev", 00:17:48.035 "req_id": 1 00:17:48.035 } 00:17:48.035 Got JSON-RPC error response 00:17:48.035 response: 00:17:48.035 { 00:17:48.035 "code": -22, 00:17:48.035 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:48.035 } 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.035 02:32:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.412 "name": "raid_bdev1", 00:17:49.412 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:49.412 "strip_size_kb": 0, 
00:17:49.412 "state": "online", 00:17:49.412 "raid_level": "raid1", 00:17:49.412 "superblock": true, 00:17:49.412 "num_base_bdevs": 2, 00:17:49.412 "num_base_bdevs_discovered": 1, 00:17:49.412 "num_base_bdevs_operational": 1, 00:17:49.412 "base_bdevs_list": [ 00:17:49.412 { 00:17:49.412 "name": null, 00:17:49.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.412 "is_configured": false, 00:17:49.412 "data_offset": 0, 00:17:49.412 "data_size": 7936 00:17:49.412 }, 00:17:49.412 { 00:17:49.412 "name": "BaseBdev2", 00:17:49.412 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:49.412 "is_configured": true, 00:17:49.412 "data_offset": 256, 00:17:49.412 "data_size": 7936 00:17:49.412 } 00:17:49.412 ] 00:17:49.412 }' 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.412 02:32:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.670 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.670 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.670 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.671 
02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.671 "name": "raid_bdev1", 00:17:49.671 "uuid": "a75c40a8-fcb5-4293-9a4b-868334338088", 00:17:49.671 "strip_size_kb": 0, 00:17:49.671 "state": "online", 00:17:49.671 "raid_level": "raid1", 00:17:49.671 "superblock": true, 00:17:49.671 "num_base_bdevs": 2, 00:17:49.671 "num_base_bdevs_discovered": 1, 00:17:49.671 "num_base_bdevs_operational": 1, 00:17:49.671 "base_bdevs_list": [ 00:17:49.671 { 00:17:49.671 "name": null, 00:17:49.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.671 "is_configured": false, 00:17:49.671 "data_offset": 0, 00:17:49.671 "data_size": 7936 00:17:49.671 }, 00:17:49.671 { 00:17:49.671 "name": "BaseBdev2", 00:17:49.671 "uuid": "2fe49671-7479-5c00-a799-42a6629d9284", 00:17:49.671 "is_configured": true, 00:17:49.671 "data_offset": 256, 00:17:49.671 "data_size": 7936 00:17:49.671 } 00:17:49.671 ] 00:17:49.671 }' 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99277 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99277 ']' 00:17:49.671 02:32:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99277 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99277 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:49.671 killing process with pid 99277 00:17:49.671 Received shutdown signal, test time was about 60.000000 seconds 00:17:49.671 00:17:49.671 Latency(us) 00:17:49.671 [2024-10-13T02:32:08.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.671 [2024-10-13T02:32:08.355Z] =================================================================================================================== 00:17:49.671 [2024-10-13T02:32:08.355Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99277' 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99277 00:17:49.671 [2024-10-13 02:32:08.330437] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.671 [2024-10-13 02:32:08.330570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.671 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99277 00:17:49.671 [2024-10-13 02:32:08.330625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:49.671 [2024-10-13 02:32:08.330634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:17:49.939 [2024-10-13 02:32:08.364067] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.939 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:49.939 00:17:49.939 real 0m16.259s 00:17:49.939 user 0m21.735s 00:17:49.939 sys 0m1.666s 00:17:49.939 ************************************ 00:17:49.939 END TEST raid_rebuild_test_sb_md_interleaved 00:17:49.939 ************************************ 00:17:49.939 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:49.939 02:32:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.223 02:32:08 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:50.223 02:32:08 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:50.223 02:32:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99277 ']' 00:17:50.223 02:32:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99277 00:17:50.223 02:32:08 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:50.223 00:17:50.223 real 10m6.255s 00:17:50.223 user 14m16.304s 00:17:50.223 sys 1m53.651s 00:17:50.223 02:32:08 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.223 ************************************ 00:17:50.223 END TEST bdev_raid 00:17:50.223 ************************************ 00:17:50.223 02:32:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.223 02:32:08 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:50.223 02:32:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:50.223 02:32:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.223 02:32:08 -- common/autotest_common.sh@10 -- # set +x 00:17:50.223 
************************************ 00:17:50.223 START TEST spdkcli_raid 00:17:50.223 ************************************ 00:17:50.223 02:32:08 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:50.223 * Looking for test storage... 00:17:50.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:50.223 02:32:08 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:50.223 02:32:08 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:50.223 02:32:08 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:50.495 02:32:08 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.495 02:32:08 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:50.495 02:32:08 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.495 02:32:08 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:50.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.495 --rc genhtml_branch_coverage=1 00:17:50.495 --rc genhtml_function_coverage=1 00:17:50.495 --rc genhtml_legend=1 00:17:50.495 --rc geninfo_all_blocks=1 00:17:50.495 --rc geninfo_unexecuted_blocks=1 00:17:50.495 00:17:50.495 ' 00:17:50.495 02:32:08 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:50.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.495 --rc genhtml_branch_coverage=1 00:17:50.495 --rc genhtml_function_coverage=1 00:17:50.495 --rc genhtml_legend=1 00:17:50.495 --rc geninfo_all_blocks=1 00:17:50.495 --rc geninfo_unexecuted_blocks=1 00:17:50.495 00:17:50.495 ' 00:17:50.496 
02:32:08 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:50.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.496 --rc genhtml_branch_coverage=1 00:17:50.496 --rc genhtml_function_coverage=1 00:17:50.496 --rc genhtml_legend=1 00:17:50.496 --rc geninfo_all_blocks=1 00:17:50.496 --rc geninfo_unexecuted_blocks=1 00:17:50.496 00:17:50.496 ' 00:17:50.496 02:32:08 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:50.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.496 --rc genhtml_branch_coverage=1 00:17:50.496 --rc genhtml_function_coverage=1 00:17:50.496 --rc genhtml_legend=1 00:17:50.496 --rc geninfo_all_blocks=1 00:17:50.496 --rc geninfo_unexecuted_blocks=1 00:17:50.496 00:17:50.496 ' 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:50.496 02:32:08 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:50.496 02:32:08 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:50.496 02:32:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.496 02:32:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.496 02:32:09 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:50.496 02:32:09 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99950 00:17:50.496 02:32:09 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:50.496 02:32:09 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99950 00:17:50.496 02:32:09 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 99950 ']' 00:17:50.496 02:32:09 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.496 02:32:09 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.496 02:32:09 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.496 02:32:09 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.496 02:32:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.496 [2024-10-13 02:32:09.105031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:50.496 [2024-10-13 02:32:09.105250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99950 ] 00:17:50.755 [2024-10-13 02:32:09.241842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:50.755 [2024-10-13 02:32:09.294339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.755 [2024-10-13 02:32:09.294433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.323 02:32:09 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.323 02:32:09 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:17:51.323 02:32:09 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:51.323 02:32:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.323 02:32:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.323 02:32:10 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:51.323 02:32:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.323 02:32:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.582 02:32:10 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:51.582 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:51.582 ' 00:17:52.960 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:52.960 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:53.219 02:32:11 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:53.219 02:32:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:53.219 02:32:11 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.219 02:32:11 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:53.219 02:32:11 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:53.219 02:32:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.219 02:32:11 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:53.219 ' 00:17:54.155 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:54.414 02:32:12 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:54.414 02:32:12 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:54.414 02:32:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.414 02:32:12 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:54.414 02:32:12 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:54.414 02:32:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.414 02:32:12 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:54.414 02:32:12 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:54.982 02:32:13 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:54.982 02:32:13 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:54.982 02:32:13 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:54.982 02:32:13 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:54.982 02:32:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.982 02:32:13 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:54.982 02:32:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:54.982 02:32:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.982 02:32:13 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:54.982 ' 00:17:55.918 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:56.176 02:32:14 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:56.176 02:32:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.176 02:32:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.176 02:32:14 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:56.176 02:32:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.176 02:32:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.176 02:32:14 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:56.176 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:56.176 ' 00:17:57.553 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:57.553 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:57.553 02:32:16 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:57.553 02:32:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:57.553 02:32:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.811 02:32:16 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99950 00:17:57.811 02:32:16 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99950 ']' 00:17:57.811 02:32:16 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99950 00:17:57.811 02:32:16 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:17:57.811 02:32:16 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.811 02:32:16 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99950 00:17:57.811 killing process with pid 99950 00:17:57.811 02:32:16 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.811 02:32:16 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.811 02:32:16 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99950' 00:17:57.811 02:32:16 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 99950 00:17:57.811 02:32:16 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 99950 00:17:58.072 02:32:16 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:58.072 02:32:16 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99950 ']' 00:17:58.072 Process with pid 99950 is not found 00:17:58.072 02:32:16 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99950 00:17:58.072 02:32:16 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99950 ']' 00:17:58.072 02:32:16 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99950 00:17:58.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99950) - No such process 00:17:58.072 02:32:16 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 99950 is not found' 00:17:58.072 02:32:16 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:58.072 02:32:16 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:58.073 02:32:16 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:58.073 02:32:16 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:58.073 00:17:58.073 real 0m7.957s 00:17:58.073 user 0m16.984s 00:17:58.073 sys 
0m1.131s 00:17:58.073 02:32:16 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:58.073 02:32:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.073 ************************************ 00:17:58.073 END TEST spdkcli_raid 00:17:58.073 ************************************ 00:17:58.330 02:32:16 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:58.330 02:32:16 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:58.330 02:32:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:58.330 02:32:16 -- common/autotest_common.sh@10 -- # set +x 00:17:58.330 ************************************ 00:17:58.330 START TEST blockdev_raid5f 00:17:58.330 ************************************ 00:17:58.330 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:58.330 * Looking for test storage... 00:17:58.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:58.330 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:58.330 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:17:58.330 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:58.330 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.330 02:32:16 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:58.331 02:32:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:58.331 02:32:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.331 02:32:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:58.331 02:32:16 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.331 02:32:16 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.331 02:32:16 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.331 02:32:16 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:58.331 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.331 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:58.331 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.331 --rc genhtml_branch_coverage=1 00:17:58.331 --rc genhtml_function_coverage=1 00:17:58.331 --rc genhtml_legend=1 00:17:58.331 --rc geninfo_all_blocks=1 00:17:58.331 --rc geninfo_unexecuted_blocks=1 00:17:58.331 00:17:58.331 ' 00:17:58.331 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:58.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.331 --rc genhtml_branch_coverage=1 00:17:58.331 --rc genhtml_function_coverage=1 00:17:58.331 --rc genhtml_legend=1 00:17:58.331 --rc geninfo_all_blocks=1 00:17:58.331 --rc geninfo_unexecuted_blocks=1 00:17:58.331 00:17:58.331 ' 00:17:58.331 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:58.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.331 --rc genhtml_branch_coverage=1 00:17:58.331 --rc genhtml_function_coverage=1 00:17:58.331 --rc genhtml_legend=1 00:17:58.331 --rc geninfo_all_blocks=1 00:17:58.331 --rc geninfo_unexecuted_blocks=1 00:17:58.331 00:17:58.331 ' 00:17:58.331 02:32:16 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:58.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.331 --rc genhtml_branch_coverage=1 00:17:58.331 --rc genhtml_function_coverage=1 00:17:58.331 --rc genhtml_legend=1 00:17:58.331 --rc geninfo_all_blocks=1 00:17:58.331 --rc geninfo_unexecuted_blocks=1 00:17:58.331 00:17:58.331 ' 00:17:58.331 02:32:16 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:58.331 02:32:16 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:58.331 02:32:16 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:58.331 02:32:16 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:58.331 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:58.331 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:58.331 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:58.331 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:58.331 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:58.331 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:58.331 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:58.331 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:58.331 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:17:58.588 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100203 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:58.589 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100203 00:17:58.589 02:32:17 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100203 ']' 00:17:58.589 02:32:17 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.589 02:32:17 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.589 02:32:17 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.589 02:32:17 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.589 02:32:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:58.589 [2024-10-13 02:32:17.117422] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:58.589 [2024-10-13 02:32:17.117557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100203 ] 00:17:58.589 [2024-10-13 02:32:17.262855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.847 [2024-10-13 02:32:17.314489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.414 02:32:17 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.414 02:32:17 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:17:59.414 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:59.414 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:17:59.414 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:59.414 02:32:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.414 02:32:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.414 Malloc0 00:17:59.414 Malloc1 00:17:59.414 Malloc2 00:17:59.414 02:32:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.414 02:32:17 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:59.415 02:32:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.415 02:32:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.415 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:17:59.415 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.415 
02:32:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.415 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.415 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.415 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:59.415 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:59.415 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.415 02:32:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.673 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:59.673 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:59.673 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2e5654e1-6212-4d0c-8b89-38e256b4cd47"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2e5654e1-6212-4d0c-8b89-38e256b4cd47",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2e5654e1-6212-4d0c-8b89-38e256b4cd47",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0c3b3406-5247-456e-ac2e-62916bd0c2f8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "fb2f028a-ddd4-4715-855a-8b03d81eab7a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "22d9dc93-e529-4558-984a-54925a48f7b2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:59.673 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:59.673 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:59.673 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:59.673 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100203 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100203 ']' 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100203 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100203 00:17:59.673 killing process with pid 100203 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100203' 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100203 00:17:59.673 02:32:18 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100203 00:18:00.240 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:00.240 02:32:18 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:00.240 02:32:18 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:00.240 02:32:18 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.240 02:32:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:00.240 ************************************ 00:18:00.240 START TEST bdev_hello_world 00:18:00.240 ************************************ 00:18:00.240 02:32:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:00.240 [2024-10-13 02:32:18.709444] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:18:00.240 [2024-10-13 02:32:18.709638] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100243 ] 00:18:00.240 [2024-10-13 02:32:18.854466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.240 [2024-10-13 02:32:18.906415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.498 [2024-10-13 02:32:19.090638] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:00.498 [2024-10-13 02:32:19.090779] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:00.498 [2024-10-13 02:32:19.090812] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:00.498 [2024-10-13 02:32:19.091172] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:00.499 [2024-10-13 02:32:19.091352] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:00.499 [2024-10-13 02:32:19.091411] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:00.499 [2024-10-13 02:32:19.091507] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:00.499 00:18:00.499 [2024-10-13 02:32:19.091558] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:00.757 00:18:00.757 real 0m0.713s 00:18:00.757 user 0m0.408s 00:18:00.757 sys 0m0.190s 00:18:00.757 02:32:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.757 ************************************ 00:18:00.757 END TEST bdev_hello_world 00:18:00.757 ************************************ 00:18:00.757 02:32:19 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:00.757 02:32:19 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:00.757 02:32:19 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:00.757 02:32:19 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.758 02:32:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:00.758 ************************************ 00:18:00.758 START TEST bdev_bounds 00:18:00.758 ************************************ 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100274 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100274' 00:18:00.758 Process bdevio pid: 100274 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100274 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100274 ']' 00:18:00.758 02:32:19 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.758 02:32:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:01.017 [2024-10-13 02:32:19.492759] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:01.017 [2024-10-13 02:32:19.492902] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100274 ] 00:18:01.017 [2024-10-13 02:32:19.638978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:01.017 [2024-10-13 02:32:19.692488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.017 [2024-10-13 02:32:19.692582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.017 [2024-10-13 02:32:19.692723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.953 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.953 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:18:01.953 02:32:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:01.953 I/O targets: 00:18:01.953 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:01.953 
00:18:01.953 00:18:01.953 CUnit - A unit testing framework for C - Version 2.1-3 00:18:01.953 http://cunit.sourceforge.net/ 00:18:01.953 00:18:01.953 00:18:01.953 Suite: bdevio tests on: raid5f 00:18:01.953 Test: blockdev write read block ...passed 00:18:01.953 Test: blockdev write zeroes read block ...passed 00:18:01.953 Test: blockdev write zeroes read no split ...passed 00:18:01.953 Test: blockdev write zeroes read split ...passed 00:18:01.953 Test: blockdev write zeroes read split partial ...passed 00:18:01.953 Test: blockdev reset ...passed 00:18:01.953 Test: blockdev write read 8 blocks ...passed 00:18:01.953 Test: blockdev write read size > 128k ...passed 00:18:01.953 Test: blockdev write read invalid size ...passed 00:18:01.953 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:01.953 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:01.953 Test: blockdev write read max offset ...passed 00:18:01.953 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:01.953 Test: blockdev writev readv 8 blocks ...passed 00:18:01.953 Test: blockdev writev readv 30 x 1block ...passed 00:18:01.953 Test: blockdev writev readv block ...passed 00:18:01.953 Test: blockdev writev readv size > 128k ...passed 00:18:01.953 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:01.953 Test: blockdev comparev and writev ...passed 00:18:01.953 Test: blockdev nvme passthru rw ...passed 00:18:01.953 Test: blockdev nvme passthru vendor specific ...passed 00:18:01.953 Test: blockdev nvme admin passthru ...passed 00:18:01.953 Test: blockdev copy ...passed 00:18:01.953 00:18:01.953 Run Summary: Type Total Ran Passed Failed Inactive 00:18:01.953 suites 1 1 n/a 0 0 00:18:01.953 tests 23 23 23 0 0 00:18:01.953 asserts 130 130 130 0 n/a 00:18:01.953 00:18:01.953 Elapsed time = 0.333 seconds 00:18:01.953 0 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100274 
00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100274 ']' 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100274 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100274 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:02.212 killing process with pid 100274 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100274' 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100274 00:18:02.212 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100274 00:18:02.471 ************************************ 00:18:02.471 END TEST bdev_bounds 00:18:02.471 ************************************ 00:18:02.471 02:32:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:02.471 00:18:02.471 real 0m1.532s 00:18:02.471 user 0m3.733s 00:18:02.471 sys 0m0.377s 00:18:02.471 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:02.471 02:32:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:02.471 02:32:21 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:02.471 02:32:21 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:02.471 02:32:21 blockdev_raid5f -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:18:02.471 02:32:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:02.471 ************************************ 00:18:02.471 START TEST bdev_nbd 00:18:02.471 ************************************ 00:18:02.471 02:32:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:02.471 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:02.471 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:02.471 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:02.471 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:02.472 02:32:21 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100321 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100321 /var/tmp/spdk-nbd.sock 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100321 ']' 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:02.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.472 02:32:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:02.472 [2024-10-13 02:32:21.117911] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:18:02.472 [2024-10-13 02:32:21.118173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.731 [2024-10-13 02:32:21.267037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.731 [2024-10-13 02:32:21.319368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.300 02:32:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.300 02:32:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:18:03.300 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:03.301 02:32:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.570 1+0 records in 00:18:03.570 1+0 records out 00:18:03.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355133 s, 11.5 MB/s 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:03.570 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:03.845 { 00:18:03.845 "nbd_device": "/dev/nbd0", 00:18:03.845 "bdev_name": "raid5f" 00:18:03.845 } 00:18:03.845 ]' 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:03.845 { 00:18:03.845 "nbd_device": "/dev/nbd0", 00:18:03.845 "bdev_name": "raid5f" 00:18:03.845 } 00:18:03.845 ]' 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.845 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.104 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.363 02:32:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:04.622 /dev/nbd0 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.622 02:32:23 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.622 1+0 records in 00:18:04.622 1+0 records out 00:18:04.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451972 s, 9.1 MB/s 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.622 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:04.881 { 00:18:04.881 "nbd_device": "/dev/nbd0", 00:18:04.881 "bdev_name": "raid5f" 00:18:04.881 } 00:18:04.881 ]' 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:04.881 { 00:18:04.881 "nbd_device": "/dev/nbd0", 00:18:04.881 "bdev_name": "raid5f" 00:18:04.881 } 00:18:04.881 ]' 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:04.881 256+0 records in 00:18:04.881 256+0 records out 00:18:04.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138076 s, 75.9 MB/s 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:04.881 256+0 records in 00:18:04.881 256+0 records out 00:18:04.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278339 s, 37.7 MB/s 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:04.881 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.882 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:05.140 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.141 02:32:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:05.399 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:05.657 malloc_lvol_verify 00:18:05.657 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:05.915 6a16966e-6694-406e-84b8-f1b600dd07af 00:18:05.915 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:06.183 f75a31f0-99d2-4bcd-b9d4-166acb609c6b 00:18:06.183 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:06.183 /dev/nbd0 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:06.451 mke2fs 1.47.0 (5-Feb-2023) 00:18:06.451 Discarding device blocks: 0/4096 done 00:18:06.451 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:06.451 00:18:06.451 Allocating group tables: 0/1 done 00:18:06.451 Writing inode tables: 0/1 done 00:18:06.451 Creating journal (1024 blocks): done 00:18:06.451 Writing superblocks and filesystem accounting information: 0/1 done 00:18:06.451 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.451 02:32:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100321 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100321 ']' 00:18:06.451 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100321 00:18:06.710 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:18:06.710 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.710 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100321 00:18:06.710 killing process with pid 100321 00:18:06.710 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:06.710 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:06.710 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100321' 00:18:06.710 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100321 00:18:06.710 02:32:25 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100321 00:18:06.969 02:32:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:06.969 00:18:06.969 real 0m4.449s 00:18:06.969 user 0m6.502s 00:18:06.969 sys 0m1.277s 00:18:06.969 ************************************ 00:18:06.969 END TEST bdev_nbd 00:18:06.969 ************************************ 00:18:06.969 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:06.969 02:32:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:06.969 02:32:25 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:06.969 02:32:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:06.969 02:32:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:06.969 02:32:25 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:06.969 02:32:25 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:06.969 02:32:25 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:06.969 02:32:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:06.969 ************************************ 00:18:06.969 START TEST bdev_fio 00:18:06.969 ************************************ 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:06.969 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:18:06.969 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:07.228 ************************************ 00:18:07.228 START TEST bdev_fio_rw_verify 00:18:07.228 ************************************ 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:07.228 02:32:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:07.487 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:07.487 fio-3.35 00:18:07.487 Starting 1 thread 00:18:19.696 00:18:19.696 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100502: Sun Oct 13 02:32:36 2024 00:18:19.696 read: IOPS=11.2k, BW=43.6MiB/s (45.7MB/s)(436MiB/10001msec) 00:18:19.696 slat (nsec): min=18767, max=97948, avg=20852.68, stdev=2195.19 00:18:19.696 clat (usec): min=10, max=360, avg=143.61, stdev=50.08 00:18:19.696 lat (usec): min=30, max=381, avg=164.46, stdev=50.45 00:18:19.696 clat percentiles (usec): 00:18:19.696 | 50.000th=[ 145], 99.000th=[ 247], 99.900th=[ 277], 99.990th=[ 318], 00:18:19.696 | 99.999th=[ 359] 00:18:19.696 write: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(452MiB/9880msec); 0 zone resets 00:18:19.696 slat (usec): min=8, max=254, avg=18.24, stdev= 4.25 00:18:19.696 clat (usec): min=63, max=1795, avg=329.70, stdev=49.05 00:18:19.696 lat (usec): min=80, max=2050, avg=347.93, stdev=50.41 00:18:19.696 clat percentiles (usec): 00:18:19.696 | 50.000th=[ 334], 99.000th=[ 437], 99.900th=[ 635], 99.990th=[ 1401], 00:18:19.696 | 99.999th=[ 1713] 00:18:19.696 bw ( KiB/s): min=43120, max=51240, per=98.71%, avg=46295.16, stdev=1756.43, samples=19 00:18:19.696 iops : min=10780, max=12810, avg=11573.79, stdev=439.11, samples=19 00:18:19.696 lat (usec) : 20=0.01%, 50=0.01%, 
100=11.57%, 250=39.15%, 500=49.19% 00:18:19.696 lat (usec) : 750=0.06%, 1000=0.02% 00:18:19.696 lat (msec) : 2=0.01% 00:18:19.696 cpu : usr=98.90%, sys=0.39%, ctx=36, majf=0, minf=12374 00:18:19.696 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.696 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.696 issued rwts: total=111628,115839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.696 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:19.696 00:18:19.696 Run status group 0 (all jobs): 00:18:19.696 READ: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=436MiB (457MB), run=10001-10001msec 00:18:19.696 WRITE: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=452MiB (474MB), run=9880-9880msec 00:18:19.696 ----------------------------------------------------- 00:18:19.696 Suppressions used: 00:18:19.696 count bytes template 00:18:19.696 1 7 /usr/src/fio/parse.c 00:18:19.696 731 70176 /usr/src/fio/iolog.c 00:18:19.696 1 8 libtcmalloc_minimal.so 00:18:19.696 1 904 libcrypto.so 00:18:19.696 ----------------------------------------------------- 00:18:19.696 00:18:19.696 00:18:19.696 real 0m11.306s 00:18:19.696 user 0m11.576s 00:18:19.696 sys 0m0.661s 00:18:19.696 02:32:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.696 ************************************ 00:18:19.696 02:32:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:19.696 END TEST bdev_fio_rw_verify 00:18:19.696 ************************************ 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2e5654e1-6212-4d0c-8b89-38e256b4cd47"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2e5654e1-6212-4d0c-8b89-38e256b4cd47",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2e5654e1-6212-4d0c-8b89-38e256b4cd47",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0c3b3406-5247-456e-ac2e-62916bd0c2f8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "fb2f028a-ddd4-4715-855a-8b03d81eab7a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "22d9dc93-e529-4558-984a-54925a48f7b2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:19.696 /home/vagrant/spdk_repo/spdk 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:18:19.696 00:18:19.696 real 0m11.579s 00:18:19.696 user 0m11.699s 00:18:19.696 sys 0m0.783s 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.696 02:32:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:19.696 ************************************ 00:18:19.696 END TEST bdev_fio 00:18:19.696 ************************************ 00:18:19.696 02:32:37 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:19.696 02:32:37 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:19.696 02:32:37 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:19.696 02:32:37 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.696 02:32:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:19.696 ************************************ 00:18:19.696 START TEST bdev_verify 00:18:19.696 ************************************ 00:18:19.696 02:32:37 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:19.696 [2024-10-13 02:32:37.262170] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:18:19.696 [2024-10-13 02:32:37.262308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100661 ] 00:18:19.696 [2024-10-13 02:32:37.410665] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:19.696 [2024-10-13 02:32:37.464231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.696 [2024-10-13 02:32:37.464353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.696 Running I/O for 5 seconds... 00:18:21.199 13661.00 IOPS, 53.36 MiB/s [2024-10-13T02:32:40.843Z] 14236.00 IOPS, 55.61 MiB/s [2024-10-13T02:32:41.780Z] 14497.67 IOPS, 56.63 MiB/s [2024-10-13T02:32:42.717Z] 14800.00 IOPS, 57.81 MiB/s [2024-10-13T02:32:42.717Z] 14892.60 IOPS, 58.17 MiB/s 00:18:24.033 Latency(us) 00:18:24.033 [2024-10-13T02:32:42.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.033 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:24.033 Verification LBA range: start 0x0 length 0x2000 00:18:24.033 raid5f : 5.01 7417.68 28.98 0.00 0.00 25915.29 194.96 21978.89 00:18:24.033 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:24.033 Verification LBA range: start 0x2000 length 0x2000 00:18:24.033 raid5f : 5.01 7467.46 29.17 0.00 0.00 25622.56 1552.54 22093.36 00:18:24.033 [2024-10-13T02:32:42.717Z] =================================================================================================================== 00:18:24.033 [2024-10-13T02:32:42.717Z] Total : 14885.14 58.15 0.00 0.00 25768.48 194.96 22093.36 00:18:24.292 00:18:24.292 real 0m5.742s 00:18:24.292 user 0m10.680s 00:18:24.292 sys 0m0.229s 00:18:24.292 02:32:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.292 02:32:42 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:24.292 ************************************ 00:18:24.292 END TEST bdev_verify 00:18:24.292 ************************************ 00:18:24.550 02:32:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:24.550 02:32:42 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:24.550 02:32:42 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.550 02:32:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:24.550 ************************************ 00:18:24.550 START TEST bdev_verify_big_io 00:18:24.550 ************************************ 00:18:24.550 02:32:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:24.550 [2024-10-13 02:32:43.081066] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:24.550 [2024-10-13 02:32:43.081224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100742 ] 00:18:24.550 [2024-10-13 02:32:43.230140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:24.809 [2024-10-13 02:32:43.282061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.809 [2024-10-13 02:32:43.282189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.809 Running I/O for 5 seconds... 
00:18:27.121 756.00 IOPS, 47.25 MiB/s [2024-10-13T02:32:46.742Z] 761.00 IOPS, 47.56 MiB/s [2024-10-13T02:32:47.678Z] 802.67 IOPS, 50.17 MiB/s [2024-10-13T02:32:48.612Z] 825.00 IOPS, 51.56 MiB/s [2024-10-13T02:32:48.870Z] 862.40 IOPS, 53.90 MiB/s 00:18:30.186 Latency(us) 00:18:30.186 [2024-10-13T02:32:48.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.186 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:30.186 Verification LBA range: start 0x0 length 0x200 00:18:30.186 raid5f : 5.27 433.59 27.10 0.00 0.00 7285671.04 219.11 316862.27 00:18:30.187 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:30.187 Verification LBA range: start 0x200 length 0x200 00:18:30.187 raid5f : 5.28 432.48 27.03 0.00 0.00 7340318.48 153.82 316862.27 00:18:30.187 [2024-10-13T02:32:48.871Z] =================================================================================================================== 00:18:30.187 [2024-10-13T02:32:48.871Z] Total : 866.07 54.13 0.00 0.00 7312994.76 153.82 316862.27 00:18:30.444 00:18:30.444 real 0m6.012s 00:18:30.444 user 0m11.209s 00:18:30.444 sys 0m0.232s 00:18:30.444 02:32:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:30.444 02:32:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.444 ************************************ 00:18:30.444 END TEST bdev_verify_big_io 00:18:30.444 ************************************ 00:18:30.444 02:32:49 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:30.444 02:32:49 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:30.444 02:32:49 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:30.444 02:32:49 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:30.444 ************************************ 00:18:30.444 START TEST bdev_write_zeroes 00:18:30.444 ************************************ 00:18:30.444 02:32:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:30.701 [2024-10-13 02:32:49.152282] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:30.701 [2024-10-13 02:32:49.152406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100819 ] 00:18:30.701 [2024-10-13 02:32:49.287021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.701 [2024-10-13 02:32:49.338097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.960 Running I/O for 1 seconds... 
00:18:31.896 26583.00 IOPS, 103.84 MiB/s 00:18:31.896 Latency(us) 00:18:31.896 [2024-10-13T02:32:50.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.896 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:31.896 raid5f : 1.01 26569.08 103.79 0.00 0.00 4802.62 1380.83 6496.36 00:18:31.896 [2024-10-13T02:32:50.580Z] =================================================================================================================== 00:18:31.896 [2024-10-13T02:32:50.580Z] Total : 26569.08 103.79 0.00 0.00 4802.62 1380.83 6496.36 00:18:32.155 00:18:32.155 real 0m1.718s 00:18:32.155 user 0m1.391s 00:18:32.155 sys 0m0.206s 00:18:32.155 02:32:50 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:32.155 02:32:50 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:32.155 ************************************ 00:18:32.155 END TEST bdev_write_zeroes 00:18:32.155 ************************************ 00:18:32.414 02:32:50 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:32.414 02:32:50 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:32.414 02:32:50 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:32.414 02:32:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:32.414 ************************************ 00:18:32.414 START TEST bdev_json_nonenclosed 00:18:32.414 ************************************ 00:18:32.414 02:32:50 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:32.414 [2024-10-13 
02:32:50.940920] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:32.414 [2024-10-13 02:32:50.941048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100861 ] 00:18:32.414 [2024-10-13 02:32:51.076627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.673 [2024-10-13 02:32:51.128295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.673 [2024-10-13 02:32:51.128403] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:32.673 [2024-10-13 02:32:51.128425] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:32.673 [2024-10-13 02:32:51.128439] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:32.673 00:18:32.673 real 0m0.387s 00:18:32.673 user 0m0.182s 00:18:32.673 sys 0m0.101s 00:18:32.673 02:32:51 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:32.673 02:32:51 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:32.673 ************************************ 00:18:32.673 END TEST bdev_json_nonenclosed 00:18:32.673 ************************************ 00:18:32.673 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:32.673 02:32:51 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:32.673 02:32:51 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:32.673 02:32:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:32.673 
************************************ 00:18:32.673 START TEST bdev_json_nonarray 00:18:32.673 ************************************ 00:18:32.673 02:32:51 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:32.931 [2024-10-13 02:32:51.388020] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:32.931 [2024-10-13 02:32:51.388147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100887 ] 00:18:32.931 [2024-10-13 02:32:51.522289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.931 [2024-10-13 02:32:51.572025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.931 [2024-10-13 02:32:51.572150] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:32.931 [2024-10-13 02:32:51.572173] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:32.931 [2024-10-13 02:32:51.572188] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:33.191 00:18:33.191 real 0m0.378s 00:18:33.191 user 0m0.173s 00:18:33.191 sys 0m0.102s 00:18:33.191 02:32:51 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:33.191 02:32:51 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:33.191 ************************************ 00:18:33.191 END TEST bdev_json_nonarray 00:18:33.191 ************************************ 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:18:33.191 02:32:51 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:18:33.191 00:18:33.191 real 0m34.983s 00:18:33.191 user 0m47.916s 00:18:33.191 sys 0m4.546s 00:18:33.191 02:32:51 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:33.191 02:32:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:33.191 
************************************ 00:18:33.191 END TEST blockdev_raid5f 00:18:33.191 ************************************ 00:18:33.191 02:32:51 -- spdk/autotest.sh@194 -- # uname -s 00:18:33.191 02:32:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:33.191 02:32:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:33.191 02:32:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:33.191 02:32:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@256 -- # timing_exit lib 00:18:33.191 02:32:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:33.191 02:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:33.191 02:32:51 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:18:33.191 02:32:51 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:18:33.191 02:32:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:18:33.191 02:32:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:18:33.191 02:32:51 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:18:33.191 02:32:51 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:18:33.191 02:32:51 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:18:33.191 02:32:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:33.191 02:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:33.191 02:32:51 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:18:33.191 02:32:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:18:33.191 02:32:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:18:33.191 02:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:35.736 INFO: APP EXITING 00:18:35.736 INFO: killing all VMs 00:18:35.736 INFO: killing vhost app 00:18:35.736 INFO: EXIT DONE 00:18:35.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:35.736 Waiting for block devices as requested 00:18:35.736 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:35.996 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:36.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:36.934 Cleaning 00:18:36.934 Removing: /var/run/dpdk/spdk0/config 00:18:36.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:36.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:36.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:36.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:36.934 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:36.934 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:36.934 Removing: /dev/shm/spdk_tgt_trace.pid68988 00:18:36.934 Removing: /var/run/dpdk/spdk0 00:18:36.934 Removing: /var/run/dpdk/spdk_pid100203 00:18:36.934 Removing: /var/run/dpdk/spdk_pid100243 00:18:36.934 Removing: /var/run/dpdk/spdk_pid100274 00:18:36.934 Removing: /var/run/dpdk/spdk_pid100498 00:18:36.934 Removing: /var/run/dpdk/spdk_pid100661 00:18:36.934 Removing: /var/run/dpdk/spdk_pid100742 00:18:36.934 Removing: 
/var/run/dpdk/spdk_pid100819 00:18:36.934 Removing: /var/run/dpdk/spdk_pid100861 00:18:36.934 Removing: /var/run/dpdk/spdk_pid100887 00:18:36.934 Removing: /var/run/dpdk/spdk_pid68825 00:18:36.934 Removing: /var/run/dpdk/spdk_pid68988 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69195 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69283 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69306 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69423 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69441 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69618 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69697 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69782 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69882 00:18:36.934 Removing: /var/run/dpdk/spdk_pid69957 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70002 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70033 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70104 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70215 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70642 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70690 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70742 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70759 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70817 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70833 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70902 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70918 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70960 00:18:36.934 Removing: /var/run/dpdk/spdk_pid70978 00:18:36.934 Removing: /var/run/dpdk/spdk_pid71028 00:18:36.934 Removing: /var/run/dpdk/spdk_pid71040 00:18:36.934 Removing: /var/run/dpdk/spdk_pid71178 00:18:36.934 Removing: /var/run/dpdk/spdk_pid71209 00:18:36.934 Removing: /var/run/dpdk/spdk_pid71298 00:18:36.934 Removing: /var/run/dpdk/spdk_pid72466 00:18:36.934 Removing: /var/run/dpdk/spdk_pid72672 00:18:36.934 Removing: /var/run/dpdk/spdk_pid72801 00:18:36.934 Removing: /var/run/dpdk/spdk_pid73406 00:18:36.934 Removing: /var/run/dpdk/spdk_pid73601 00:18:37.193 Removing: 
/var/run/dpdk/spdk_pid73730 00:18:37.193 Removing: /var/run/dpdk/spdk_pid74340 00:18:37.193 Removing: /var/run/dpdk/spdk_pid74658 00:18:37.193 Removing: /var/run/dpdk/spdk_pid74788 00:18:37.193 Removing: /var/run/dpdk/spdk_pid76129 00:18:37.193 Removing: /var/run/dpdk/spdk_pid76371 00:18:37.193 Removing: /var/run/dpdk/spdk_pid76500 00:18:37.193 Removing: /var/run/dpdk/spdk_pid77841 00:18:37.193 Removing: /var/run/dpdk/spdk_pid78082 00:18:37.193 Removing: /var/run/dpdk/spdk_pid78212 00:18:37.193 Removing: /var/run/dpdk/spdk_pid79548 00:18:37.193 Removing: /var/run/dpdk/spdk_pid79982 00:18:37.193 Removing: /var/run/dpdk/spdk_pid80117 00:18:37.193 Removing: /var/run/dpdk/spdk_pid81552 00:18:37.193 Removing: /var/run/dpdk/spdk_pid81806 00:18:37.193 Removing: /var/run/dpdk/spdk_pid81936 00:18:37.193 Removing: /var/run/dpdk/spdk_pid83378 00:18:37.193 Removing: /var/run/dpdk/spdk_pid83633 00:18:37.193 Removing: /var/run/dpdk/spdk_pid83764 00:18:37.193 Removing: /var/run/dpdk/spdk_pid85205 00:18:37.193 Removing: /var/run/dpdk/spdk_pid85682 00:18:37.193 Removing: /var/run/dpdk/spdk_pid85811 00:18:37.193 Removing: /var/run/dpdk/spdk_pid85944 00:18:37.193 Removing: /var/run/dpdk/spdk_pid86353 00:18:37.193 Removing: /var/run/dpdk/spdk_pid87073 00:18:37.193 Removing: /var/run/dpdk/spdk_pid87457 00:18:37.193 Removing: /var/run/dpdk/spdk_pid88130 00:18:37.193 Removing: /var/run/dpdk/spdk_pid88554 00:18:37.193 Removing: /var/run/dpdk/spdk_pid89290 00:18:37.193 Removing: /var/run/dpdk/spdk_pid89681 00:18:37.193 Removing: /var/run/dpdk/spdk_pid91597 00:18:37.193 Removing: /var/run/dpdk/spdk_pid92024 00:18:37.193 Removing: /var/run/dpdk/spdk_pid92448 00:18:37.193 Removing: /var/run/dpdk/spdk_pid94477 00:18:37.193 Removing: /var/run/dpdk/spdk_pid94956 00:18:37.193 Removing: /var/run/dpdk/spdk_pid95458 00:18:37.193 Removing: /var/run/dpdk/spdk_pid96492 00:18:37.193 Removing: /var/run/dpdk/spdk_pid96805 00:18:37.193 Removing: /var/run/dpdk/spdk_pid97723 00:18:37.193 Removing: 
/var/run/dpdk/spdk_pid98041 00:18:37.193 Removing: /var/run/dpdk/spdk_pid98965 00:18:37.193 Removing: /var/run/dpdk/spdk_pid99277 00:18:37.193 Removing: /var/run/dpdk/spdk_pid99950 00:18:37.193 Clean 00:18:37.193 02:32:55 -- common/autotest_common.sh@1451 -- # return 0 00:18:37.193 02:32:55 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:18:37.193 02:32:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.193 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:37.451 02:32:55 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:18:37.451 02:32:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.451 02:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:37.451 02:32:55 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:37.451 02:32:55 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:37.451 02:32:55 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:37.451 02:32:55 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:18:37.451 02:32:55 -- spdk/autotest.sh@394 -- # hostname 00:18:37.452 02:32:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:37.710 geninfo: WARNING: invalid characters removed from testname! 
00:18:59.696 02:33:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:01.604 02:33:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:03.523 02:33:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:05.430 02:33:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:07.341 02:33:25 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:09.251 02:33:27 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:11.159 02:33:29 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:11.159 02:33:29 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:19:11.159 02:33:29 -- common/autotest_common.sh@1681 -- $ lcov --version 00:19:11.159 02:33:29 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:19:11.419 02:33:29 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:19:11.419 02:33:29 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:19:11.419 02:33:29 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:19:11.419 02:33:29 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:19:11.419 02:33:29 -- scripts/common.sh@336 -- $ IFS=.-: 00:19:11.419 02:33:29 -- scripts/common.sh@336 -- $ read -ra ver1 00:19:11.419 02:33:29 -- scripts/common.sh@337 -- $ IFS=.-: 00:19:11.419 02:33:29 -- scripts/common.sh@337 -- $ read -ra ver2 00:19:11.419 02:33:29 -- scripts/common.sh@338 -- $ local 'op=<' 00:19:11.419 02:33:29 -- scripts/common.sh@340 -- $ ver1_l=2 00:19:11.419 02:33:29 -- scripts/common.sh@341 -- $ ver2_l=1 00:19:11.419 02:33:29 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:19:11.419 02:33:29 -- scripts/common.sh@344 -- $ case "$op" in 00:19:11.419 02:33:29 -- scripts/common.sh@345 -- $ : 1 00:19:11.419 02:33:29 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:19:11.419 02:33:29 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:11.419 02:33:29 -- scripts/common.sh@365 -- $ decimal 1 00:19:11.419 02:33:29 -- scripts/common.sh@353 -- $ local d=1 00:19:11.419 02:33:29 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:19:11.419 02:33:29 -- scripts/common.sh@355 -- $ echo 1 00:19:11.419 02:33:29 -- scripts/common.sh@365 -- $ ver1[v]=1 00:19:11.419 02:33:29 -- scripts/common.sh@366 -- $ decimal 2 00:19:11.419 02:33:29 -- scripts/common.sh@353 -- $ local d=2 00:19:11.419 02:33:29 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:19:11.419 02:33:29 -- scripts/common.sh@355 -- $ echo 2 00:19:11.419 02:33:29 -- scripts/common.sh@366 -- $ ver2[v]=2 00:19:11.419 02:33:29 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:19:11.419 02:33:29 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:19:11.419 02:33:29 -- scripts/common.sh@368 -- $ return 0 00:19:11.419 02:33:29 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.419 02:33:29 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:19:11.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.419 --rc genhtml_branch_coverage=1 00:19:11.419 --rc genhtml_function_coverage=1 00:19:11.419 --rc genhtml_legend=1 00:19:11.419 --rc geninfo_all_blocks=1 00:19:11.419 --rc geninfo_unexecuted_blocks=1 00:19:11.419 00:19:11.419 ' 00:19:11.419 02:33:29 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:19:11.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.419 --rc genhtml_branch_coverage=1 00:19:11.419 --rc genhtml_function_coverage=1 00:19:11.419 --rc genhtml_legend=1 00:19:11.419 --rc geninfo_all_blocks=1 00:19:11.419 --rc geninfo_unexecuted_blocks=1 00:19:11.419 00:19:11.419 ' 00:19:11.419 02:33:29 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:19:11.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.419 --rc genhtml_branch_coverage=1 00:19:11.419 --rc 
genhtml_function_coverage=1 00:19:11.419 --rc genhtml_legend=1 00:19:11.419 --rc geninfo_all_blocks=1 00:19:11.419 --rc geninfo_unexecuted_blocks=1 00:19:11.419 00:19:11.419 ' 00:19:11.419 02:33:29 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:19:11.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.419 --rc genhtml_branch_coverage=1 00:19:11.419 --rc genhtml_function_coverage=1 00:19:11.419 --rc genhtml_legend=1 00:19:11.419 --rc geninfo_all_blocks=1 00:19:11.419 --rc geninfo_unexecuted_blocks=1 00:19:11.419 00:19:11.419 ' 00:19:11.419 02:33:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:11.419 02:33:29 -- scripts/common.sh@15 -- $ shopt -s extglob 00:19:11.419 02:33:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:11.419 02:33:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.419 02:33:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.419 02:33:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.419 02:33:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.419 02:33:29 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.420 02:33:29 -- paths/export.sh@5 -- $ export PATH 00:19:11.420 02:33:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.420 02:33:29 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:11.420 02:33:29 -- common/autobuild_common.sh@479 -- $ date +%s 00:19:11.420 02:33:29 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1728786809.XXXXXX 00:19:11.420 02:33:29 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1728786809.2cYm0T 00:19:11.420 02:33:29 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:19:11.420 02:33:29 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:19:11.420 02:33:29 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:19:11.420 02:33:29 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:19:11.420 02:33:29 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:11.420 02:33:29 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude 
/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:11.420 02:33:29 -- common/autobuild_common.sh@495 -- $ get_config_params 00:19:11.420 02:33:29 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:19:11.420 02:33:29 -- common/autotest_common.sh@10 -- $ set +x 00:19:11.420 02:33:29 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:19:11.420 02:33:29 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:19:11.420 02:33:29 -- pm/common@17 -- $ local monitor 00:19:11.420 02:33:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:11.420 02:33:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:11.420 02:33:29 -- pm/common@25 -- $ sleep 1 00:19:11.420 02:33:29 -- pm/common@21 -- $ date +%s 00:19:11.420 02:33:29 -- pm/common@21 -- $ date +%s 00:19:11.420 02:33:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728786809 00:19:11.420 02:33:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728786809 00:19:11.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728786809_collect-cpu-load.pm.log 00:19:11.420 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728786809_collect-vmstat.pm.log 00:19:12.359 02:33:30 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:19:12.359 02:33:30 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:19:12.359 02:33:30 -- spdk/autopackage.sh@14 -- $ timing_finish 00:19:12.359 02:33:30 -- common/autotest_common.sh@736 -- 
$ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:12.359 02:33:30 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:19:12.359 02:33:30 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:12.359 02:33:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:19:12.359 02:33:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:12.359 02:33:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:19:12.359 02:33:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:12.359 02:33:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:19:12.359 02:33:31 -- pm/common@44 -- $ pid=102396 00:19:12.359 02:33:31 -- pm/common@50 -- $ kill -TERM 102396 00:19:12.359 02:33:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:12.359 02:33:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:12.359 02:33:31 -- pm/common@44 -- $ pid=102398 00:19:12.359 02:33:31 -- pm/common@50 -- $ kill -TERM 102398 00:19:12.359 + [[ -n 6161 ]] 00:19:12.359 + sudo kill 6161 00:19:12.629 [Pipeline] } 00:19:12.645 [Pipeline] // timeout 00:19:12.650 [Pipeline] } 00:19:12.667 [Pipeline] // stage 00:19:12.672 [Pipeline] } 00:19:12.691 [Pipeline] // catchError 00:19:12.700 [Pipeline] stage 00:19:12.703 [Pipeline] { (Stop VM) 00:19:12.714 [Pipeline] sh 00:19:12.998 + vagrant halt 00:19:15.547 ==> default: Halting domain... 00:19:23.693 [Pipeline] sh 00:19:23.977 + vagrant destroy -f 00:19:26.530 ==> default: Removing domain... 
00:19:26.542 [Pipeline] sh 00:19:26.826 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:19:26.836 [Pipeline] } 00:19:26.851 [Pipeline] // stage 00:19:26.856 [Pipeline] } 00:19:26.870 [Pipeline] // dir 00:19:26.875 [Pipeline] } 00:19:26.889 [Pipeline] // wrap 00:19:26.895 [Pipeline] } 00:19:26.908 [Pipeline] // catchError 00:19:26.918 [Pipeline] stage 00:19:26.920 [Pipeline] { (Epilogue) 00:19:26.932 [Pipeline] sh 00:19:27.217 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:19:31.426 [Pipeline] catchError 00:19:31.428 [Pipeline] { 00:19:31.440 [Pipeline] sh 00:19:31.723 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:19:31.723 Artifacts sizes are good 00:19:31.732 [Pipeline] } 00:19:31.747 [Pipeline] // catchError 00:19:31.759 [Pipeline] archiveArtifacts 00:19:31.768 Archiving artifacts 00:19:31.874 [Pipeline] cleanWs 00:19:31.886 [WS-CLEANUP] Deleting project workspace... 00:19:31.887 [WS-CLEANUP] Deferred wipeout is used... 00:19:31.893 [WS-CLEANUP] done 00:19:31.894 [Pipeline] } 00:19:31.905 [Pipeline] // stage 00:19:31.909 [Pipeline] } 00:19:31.918 [Pipeline] // node 00:19:31.921 [Pipeline] End of Pipeline 00:19:31.950 Finished: SUCCESS